A Versatile Model-Based Visibility Measure for Geometric Primitives


Marc M. Ellenrieder¹, Lars Krüger¹, Dirk Stößel², and Marc Hanheide²

¹ DaimlerChrysler AG, Research & Technology, 89013 Ulm, Germany
² Faculty of Technology, Bielefeld University, 33501 Bielefeld, Germany

In: H. Kalviainen et al. (Eds.): SCIA 2005, LNCS 3540, pp. 669-678, 2005. © Springer-Verlag Berlin Heidelberg 2005

Abstract. In this paper, we introduce a novel model-based visibility measure for geometric primitives called the visibility map. It is simple to calculate, memory efficient, accurate for viewpoints outside the convex hull of the object, and versatile in terms of possible applications. Several useful properties of visibility maps that show their superiority to existing visibility measures are derived. Various example applications from the automotive industry where the presented measure is used successfully conclude the paper.

1 Introduction and Motivation

Finding viewpoints from which certain parts of a known object are visible to a two-dimensional, perspective sensor is not a straightforward task. Especially in complex environments like industrial inspection or object recognition applications, where several geometric primitives (often called features [11]) on complex objects have to be inspected, finding an easy-to-calculate, versatile, storage-effective and not too simplifying visibility measure is of great importance.

Up until now, several visibility measures with their particular advantages and disadvantages have been developed. However, most of them are tailor-made for specific applications, so that the underlying algorithmic and conceptual framework cannot be used in other tasks. At the same time, they do not provide a consistent modeling of visibility for different feature types like points, lines, polygons or volumetric features. Also, feature covisibility is addressed by only a few of the many existing visibility measures. Restricting the sensor positions to very few viewpoints further limits possible applications.

One of the first visibility measures for surface areas is the aspect graph by Koenderink et al. [8]. Aspect graphs assume the object's model to be centered at the origin of a sphere which encapsulates the entire model. Every possible view of the model can then be represented as a point on that sphere. Each equivalent view on the sphere is represented by a node in the aspect graph, with the connections between graph nodes representing possible transitions between views. These transitions are due to changes in the occlusion relationships between object surfaces. However, no method of generating the three-dimensional viewpoint region of an object's aspect graph node is presented in

this paper.

In [2], Cowan and Kovesi show a way to actually generate an aspect graph, although only for simple polyhedra. They describe a method to calculate the three-dimensional region where a convex polyhedral object O occludes a portion of a convex polyhedral surface S. The boundary of the occlusion zone is described by a set of separating support planes. In the case of multiple occluding objects, the union of all non-occlusion zones is calculated. The presented algorithm has quadratic computational complexity in the number of edges of the polygon. This is a significant disadvantage in complex scenes and real applications.

Tarabanis et al. [11] have presented a method of computing the spatial visibility regions of features. They define a feature as a polygonal and manifold subset of a single face of a polyhedral object. The visibility region of a feature is defined as the open and possibly empty set consisting of all viewpoints in free space for which the feature is visible in its entirety. Instead of calculating the visibility region directly, Tarabanis et al. devise a three-step algorithm that calculates the occlusion region of a feature T in linear time (in terms of object vertices). The occlusion region is the complementary area to the visibility region with respect to free space. For each element of a subset L of the faces of the polyhedral object, the (polyhedral) occluding region is calculated in a similar manner to the method shown by Cowan and Kovesi [2]. The elements of L are those faces that satisfy certain topological and morphological properties with respect to T. The occluding regions of all elements of L are merged into the complete polyhedral occlusion region O of the feature T. A check for visibility of T from a certain viewpoint can thus be reduced to a point-in-polyhedron classification. However, since the polyhedral occlusion region O has to be stored as a whole, the presented method requires a considerable amount of storage memory. This makes it difficult to employ in scenarios with highly complex parts.

Another method of calculating the approximate visibility space of a feature is presented by Trucco et al. [12]. Their method restricts the possible sensor viewpoints to positions at manually fixed intervals on a spherical grid surrounding the object. The viewing direction at each grid point connects the viewpoint with the center of the spherical grid. Visibility of a feature is determined by rendering the object from each viewpoint and counting the number of pixels of the feature in the resulting image. Covisibility, i.e. the visibility of several features at once, can be determined by counting the collective number of pixels. An advantage of this approach is that it yields a quantitative and not just boolean visibility measure for each viewpoint. Also, in terms of storage memory, the approximate visibility space is very efficient. However, the restriction to a spherical grid and the high computational complexity of the rendering process limit its use to rather simple inspection tasks.

Various other visibility measures exist in the literature. Some publications address visibility in terms of a scalar function V that is evaluated for each viewpoint. Khawaja et al. [7] use the number of visible features, the number of visible mesh faces on each feature, and the number of image pixels associated with each face as parameters of an ad-hoc postulated formula. The necessary

parameters are generated for each viewpoint by rendering the model of the inspected object from this viewpoint. Other publications, e.g. [1], use even simpler methods: the dot product of the viewing direction and the surface normal of the inspected feature. If the dot product is negative, the feature is considered to be visible. Hence, visibility can only be determined correctly for strictly convex objects.

As we have seen, existing visibility measures do not account for all of the points mentioned above. Especially the lack of versatility concerning both the variety of possible applications and the correctness of the visibility determination for 1D, 2D and 3D features is apparent. In the remainder of this paper, we therefore want to introduce the concept of visibility maps to determine the visibility of arbitrary geometric primitives. We will show some of their properties and demonstrate their versatility in various machine vision applications.

2 Visibility Maps

The term visibility map is very generic. It is used, for example, in computer graphics as a synonym for a graph characterizing the visible triangles of a mesh. In this paper, we use the term to describe a matrix that is used to determine the visibility of a geometric primitive. In principle, a visibility map is defined for points on the surface of an object. It is calculated by projecting the inspected object (and possibly the whole scene) onto a unit sphere centered at the point on the object for which the map is calculated. The unit sphere is then sampled at constant azimuth/elevation intervals ν, and the boolean information whether something has been projected onto the current point on the sphere or not is transcribed into a matrix called the visibility map. Fig. 1 illustrates this concept.

Fig. 1. Visibility map of a point on a flange (plotted over azimuth 0-360° and elevation 0-180°). The same point on the right arm of the flange is highlighted. Black regions in the visibility map represent viewing directions where a camera positioned at the center of the sphere would see parts of the flange. The positions of four example viewing directions (A-D) are marked in the map. For illustration purposes, the size of the unit sphere has been exaggerated.
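Since the visibility map is just a boolean matrix indexed by discretized azimuth and elevation, testing a viewing direction amounts to a table lookup. A minimal sketch (the function name, cell ordering — rows as elevation, columns as azimuth — and the grid resolution `nu_deg` are our assumptions for illustration):

```python
def is_occluded(M, azimuth_deg, elevation_deg, nu_deg=1.0):
    """Look up a viewing direction in visibility map M.

    Rows index elevation cells, columns index azimuth cells;
    True marks an occluded direction (a black region in Fig. 1)."""
    row = min(int(elevation_deg / nu_deg), len(M) - 1)
    col = min(int(azimuth_deg % 360.0 / nu_deg), len(M[0]) - 1)
    return M[row][col]
```

With a 1° grid, `is_occluded(M, 90.5, 45.5)` reads the cell covering azimuth 90-91°, elevation 45-46°.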

2.1 Calculating Visibility Maps

Calculating visibility maps can be implemented effectively if the object surface geometry is given as a triangulated mesh with three-dimensional vertex coordinates v_i. The vertices are projected onto a unit sphere centered at the point whose visibility map has to be calculated. Without loss of generality, we assume this point to be the origin of the coordinate system. Using the four-quadrant arctangent function, the vertices' spherical coordinates θ (azimuth), φ (elevation) and r (radius) result to

θ(ṽ_i) = arctan2(v_{i,y}, v_{i,x}),
φ(ṽ_i) = π/2 − arctan2(v_{i,z}, √(v²_{i,x} + v²_{i,y})), and    (1)
r(ṽ_i) ≡ 1.

Suppose two vertices v_i and v_j are connected by a mesh edge. Then the mesh edge is sampled at k intervals. The sampled edge points are then projected onto the sphere, and an approximate spherical mesh triangle is constructed by connecting the projected edge samples using Bresenham's algorithm. The resulting triangle outline is filled using a standard flood-fill algorithm. This process is repeated for every triangle of the object. Afterwards, the unit sphere is sampled in both azimuth and elevation direction at intervals ν, and the result whether something has been projected or not is transcribed into the matrix M, i.e. the visibility map. To account for numerical errors and to get a smooth visibility map, we further apply standard dilation/erosion operators to M. The computational complexity of this method is O(n) for objects comprised of n triangles. An example visibility map can be seen in Fig. 1.
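The construction above can be sketched compactly. This is a simplified sketch, not the paper's exact algorithm: instead of projecting triangle outlines with Bresenham's algorithm and flood-filling them, each triangle is densely sampled in barycentric coordinates and the hit cells are marked, which approximates the same map for reasonably tessellated meshes; the dilation/erosion smoothing step is omitted:

```python
import math

def visibility_map(triangles, center, nu_deg=10.0):
    """Boolean matrix over (elevation, azimuth) cells of width nu_deg;
    True where some mesh triangle projects onto the unit sphere
    centered at `center`, following the spherical coordinates of eq. (1)."""
    n_az = int(round(360.0 / nu_deg))
    n_el = int(round(180.0 / nu_deg))
    M = [[False] * n_az for _ in range(n_el)]
    cx, cy, cz = center
    k = 12  # barycentric samples per edge; raise for coarser grids
    for (a, b, c) in triangles:
        for i in range(k + 1):
            for j in range(k + 1 - i):
                u, v = i / k, j / k
                w = 1.0 - u - v
                # sample point on the triangle, relative to the map center
                x = u * a[0] + v * b[0] + w * c[0] - cx
                y = u * a[1] + v * b[1] + w * c[1] - cy
                z = u * a[2] + v * b[2] + w * c[2] - cz
                if x == y == z == 0.0:
                    continue  # sample coincides with the center itself
                theta = math.degrees(math.atan2(y, x)) % 360.0          # azimuth
                phi = 90.0 - math.degrees(math.atan2(z, math.hypot(x, y)))  # eq. (1)
                row = min(int(phi / nu_deg), n_el - 1)
                col = min(int(theta / nu_deg), n_az - 1)
                M[row][col] = True
    return M
```

A triangle hovering above the center marks only cells near elevation 0 (straight up); the lower half of the map stays free.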
2.2 Distance Transform

Sometimes it can be useful to quickly determine viewing directions where the visibility of features is as stable as possible towards slight viewpoint deviations. This means the sensors should be positioned as far away as possible from the occlusion-zone boundaries. One way to achieve this is to calculate the distance transform of a visibility map. However, since visibility maps are spherical projections, we need to use a spherical rather than a Euclidean distance measure. Using the haversine function h(x) = sin²(x/2), the distance of two points p₁ = (θ₁, φ₁) and p₂ = (θ₂, φ₂) on the unit sphere can be expressed as

d(p₁, p₂) = 2 arctan2(√a, √(1 − a)),    (2)

where a = h(θ₂ − θ₁) + cos(θ₁) cos(θ₂) h(φ₂ − φ₁). By convention, we define d < 0 for visible viewing directions and d > 0 for occluded directions. The actual distance transformation is calculated using standard propagation algorithms [3], which efficiently exploit the transitivity of the minimum relation.
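Eq. (2) transcribes directly to code (a sketch; we keep the paper's pairing of the cosine terms with θ, and the sign convention for visible vs. occluded directions is applied outside this function):

```python
import math

def haversine(x):
    """h(x) = sin^2(x/2), the helper of eq. (2)."""
    return math.sin(x / 2.0) ** 2

def sphere_dist(p1, p2):
    """Great-circle distance of p1 = (theta1, phi1) and p2 = (theta2, phi2),
    angles in radians, on the unit sphere, following eq. (2)."""
    t1, f1 = p1
    t2, f2 = p2
    a = haversine(t2 - t1) + math.cos(t1) * math.cos(t2) * haversine(f2 - f1)
    # atan2 form is numerically stable for nearly antipodal points
    return 2.0 * math.atan2(math.sqrt(a), math.sqrt(1.0 - a))
```

Antipodal points come out at distance π, a quarter turn at π/2, as expected on the unit sphere.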

Fig. 2. Geometric relations for 2D feature visibility (left) and the full feature visibility map (5) of the polygonal area shown in Fig. 1 (right). The different corner-point visibility maps are shown for illustration purposes.

2.3 Visibility of 2D and 3D Features

In many applications a visibility measure for 2D or even 3D features is required. By concept, however, the visibility map is defined for one single point on the surface of an object. Nevertheless, it can be generalized to higher-dimensional geometric primitives, i.e. lines, polygonal areas, and volumes, if it is assumed that the camera, at distance h, is far away in comparison to the maximum extension d (i.e. the length of the longest eigenvector) of the primitive. This is equal to the assumption of using the same camera direction (azimuth/elevation) for each of the N corner points of the primitive. The resulting error depends on the ratio d/h and can be estimated as follows: let p be a 3D point that is projected onto two unit spheres located at a distance d from each other. Figure 2 shows the geometric relations in the plane spanned by the sphere centers and p. The point is projected onto the first sphere at an elevation angle α and at elevation α + Δα onto the second. It is clear that, for any fixed h, Δα is maximal if p is located in the middle between the spheres, at d/2. We have

Δα = arctan(d/(2h)) − arctan(−d/(2h)).    (3)

In order to get a value of Δα that is smaller than the angular spacing ν of the visibility map, we need to find Δα(d/h) < ν. From (3) we get

d/h ≤ (1/2) tan(ν/2).    (4)

For a typical sampling interval of ν = 1°, this results in d/h ≤ 0.004.

For higher-dimensional primitives, there are basically two notions of visibility: full visibility and partial visibility. In industrial inspection applications, full visibility of a feature is often required, e.g. to check for complete visibility of bar codes.
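The far-field condition (4) is cheap to verify before reusing one shared viewing direction for all corner points of a primitive. A small sketch (the function name is ours):

```python
import math

def far_field_ok(d, h, nu_deg=1.0):
    """Check eq. (4): the primitive's maximum extension d must be small
    against the camera distance h, so that the shared per-primitive
    viewing direction stays within one map cell of width nu_deg."""
    nu = math.radians(nu_deg)
    return d / h <= 0.5 * math.tan(nu / 2.0)
```

For ν = 1° the bound evaluates to roughly 0.0044, matching the d/h ≤ 0.004 figure quoted above.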
In other applications, e.g. object recognition, partial visibility might however be sufficient. For primitives where full visibility is required, it can be calculated by forming a union of the visibility maps of each corner point,

leading to the full feature visibility map

M = ⋃_{k=0}^{N−1} M_{k,corner}.    (5)

This concept is applicable to 2D polygonal areas, e.g. single meshes, or parts of 3D primitives, e.g. several sides of a cube. It can be further extended to the covisibility of several features, resulting in the combined feature visibility map M. If full visibility is not required or possible, e.g. in the case of three-dimensional primitives, (5) will not yield a correct visibility measure. Nevertheless, the visibility map can still be used if the visibility maps of the primitive's vertices M_{k,corner} are combined into the partial feature visibility map

M = Σ_{k=0}^{N−1} 2^k M_{k,corner}.    (6)

Then the visibility of each vertex can be evaluated separately. Using the distance transforms (2) of the visibility maps of two vertices p₁ and p₂, it is also possible to calculate the visible length d_vis of a mesh edge connecting these vertices. Figure 3 shows the geometric relations in the plane E spanned by p₁, p₂ and the viewpoint v. The plane cuts a great circle from both visibility maps. All angles and distances used in the following are measured in this plane. For the sake of simplicity, we define a coordinate system with unit vectors x_E and y_E originating in p₁. Then the x_E-y_E components of k are given by

k_{x_E} = (1 − tan(α₁ + δ₁)/tan(α₂ + δ₂))⁻¹,  k_{y_E} = k_{x_E} tan(α₁ + δ₁).    (7)

The angles α_k and δ_k can be drawn directly from the visibility maps and their distance transforms. Calculating the intersection of g₃ with the x_E-axis results in

d_vis(v) = v_{x_E} − v_{y_E} (v_{x_E} − k_{x_E}) / (v_{y_E} − k_{y_E}).    (8)

Here, v_{x_E} and v_{y_E} describe the coordinates of the viewpoint projected onto the plane E. If applied to each mesh edge, (8) allows the visible area or volume of the primitive to be calculated directly from the visibility maps (Fig. 3). In viewpoint planning applications, this property can be used to directly assign a quantitative visibility value to a viewpoint.
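On boolean matrices, eqs. (5) and (6) amount to an element-wise OR (full visibility fails as soon as any corner is occluded) and a bit-encoded sum (each corner keeps its own bit, so vertices can be evaluated separately). A sketch, with helper names of our choosing and True marking occlusion as before:

```python
def full_feature_map(corner_maps):
    """Eq. (5): element-wise union of the corner maps; a True cell means
    the feature is not fully visible from that viewing direction."""
    rows, cols = len(corner_maps[0]), len(corner_maps[0][0])
    return [[any(m[r][c] for m in corner_maps) for c in range(cols)]
            for r in range(rows)]

def partial_feature_map(corner_maps):
    """Eq. (6): bit-encode the corner maps; bit k of a cell tells whether
    corner k is occluded in that viewing direction."""
    rows, cols = len(corner_maps[0]), len(corner_maps[0][0])
    return [[sum(1 << k for k, m in enumerate(corner_maps) if m[r][c])
             for c in range(cols)]
            for r in range(rows)]
```

Testing bit k of a partial map cell (`cell >> k & 1`) recovers the visibility of corner k alone.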
2.4 Visibility Ratio and Memory Efficiency

Using visibility maps, it is possible to estimate the size of the space from which one or more features are completely visible. This can be used to determine whether, e.g., a sensor head mounted on a robotic arm can be positioned such that certain primitives are visible. For this, we define the visibility ratio of a single feature F_j:

V(F_j) = (visible area of M(F_j)) / (total area of M(F_j)).    (9)
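A sketch of eq. (9) over a boolean map (our simplification: cells are counted uniformly, whereas strictly each cell should be weighted by the solid angle it covers on the sphere):

```python
def visibility_ratio(M):
    """Eq. (9): fraction of map cells that are visible
    (False = visible, True = occluded, as in the sketches above)."""
    total = sum(len(row) for row in M)
    visible = sum(1 for row in M for occluded in row if not occluded)
    return visible / total
```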

Fig. 3. The geometric relations in the plane spanned by p₁, p₂ and the viewpoint v (left) for calculating the visible area of a partially visible mesh facet (right). All units are measured in this plane. The angles δ₁,₂ are derived from the visibility maps' distance transforms (2).

This property can be interpreted as the ratio of the visible solid angle of the feature to the full sphere. Equally, we define the combined feature visibility ratio V(C_i) of the features associated with camera i by using the combined feature visibility map M(C_i). It has to be noted that for full feature visibility maps V ≤ 0.5 for all two-dimensional primitives other than a single line, since they lie on the surface of the inspected objects and are thus not visible from viewpoints on the back side of the object. One can therefore assume that there is at least one large connected region in the visibility map. Hence, visibility maps can be stored very effectively by using simple run-length encoding as a means to compress them.

3 Applications Using Visibility Maps

To show the versatility of the visibility map, we are going to present several systems using the presented concepts in various applications from object recognition to viewpoint planning. In our opinion, this versatility, together with the simplicity of its concept, renders the visibility map superior to other existing visibility measures.

3.1 Object Recognition and Pose Estimation

One of the first applications where visibility maps have been used was object recognition. In [9], a system for multi-feature, multi-sensor classification and localization of 3D objects in 2D image sequences is presented.
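Because a visibility map typically consists of a few large connected regions, simple row-wise run-length encoding already compresses it well. A minimal sketch (encoder and decoder names are ours):

```python
def rle_encode(M):
    """Row-wise run-length encoding of a boolean visibility map:
    each row becomes a list of (value, run_length) pairs."""
    runs = []
    for row in M:
        row_runs = []
        count = 1
        for prev, cur in zip(row, row[1:]):
            if cur == prev:
                count += 1
            else:
                row_runs.append((prev, count))
                count = 1
        row_runs.append((row[-1], count))  # flush the final run
        runs.append(row_runs)
    return runs

def rle_decode(runs):
    """Invert rle_encode back to the boolean matrix."""
    return [[value for value, count in row for _ in range(count)]
            for row in runs]
```

A 360-cell row with one occluded band stores as two or three pairs instead of 360 booleans.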
It uses a hypothesize-and-test approach to estimate the type and pose of objects. Characteristic Localized Features (CLFs), e.g. contours, 3D corners, etc., are extracted from the geometric models of the different objects, and viewpoint-dependent graphs of the CLFs projected onto the image plane are generated for each pose and object-type hypothesis. By using elastic graph matching algorithms [6], the graph is aligned with the features in the image. Viewpoint-dependent graph rendering is only possible because the visibility of each CLF was calculated using the visibility map. The system is used both for optical inspection for quality control and for airport ground-traffic surveillance. An example CLF graph and the recognized object type and pose are shown in Fig. 4. Since there are typically several hundred CLFs per object, whose visibility has to be evaluated several times, both the storage memory and the speed of the employed visibility measure are crucial.

Fig. 4. From left to right: an oil cap, and its CLF graph seen from two different viewpoints, aligned to the image. Visibility of the CLFs was calculated using visibility maps.

3.2 Optimal Sensor-Feature Association

The cost of automatic inspection systems for quality control directly depends on the number of installed sensors. Clearly, more than r sensors for r features is therefore not an option; fewer, i.e. k < r, sensors would be even better. For r feature areas, there are 2^r − 1 possible feature combinations that can be assigned to one sensor. To find the optimal association, we need to define a criterion that compares different combinations. The combined feature visibility map's visibility ratio can be used for finding an optimal assignment matrix C of size k × r with k ≤ r, whose column sums equal 1 and whose elements C_ij are 1 if feature j is associated with camera i. Using a row vector C_i that represents the associated features of camera i, the weighted visibility ratio of an assignment matrix C with k rows,

Ṽ(C) = (1/k) Σ_{i=1}^{k} V(C_i),    (10)

to compare all possible assignments is introduced in [4].
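Given the per-camera combined visibility ratios, eq. (10) is a plain average. A self-contained sketch that evaluates one assignment (the combined ratio is taken over the union (5) of the assigned features' maps; the data layout, a list of feature-index lists per camera, is our assumption):

```python
def combined_ratio(maps):
    """Visibility ratio of the union (eq. (5)) of several feature maps:
    a direction counts as visible only if every feature is visible."""
    rows, cols = len(maps[0]), len(maps[0][0])
    visible = sum(1 for r in range(rows) for c in range(cols)
                  if not any(m[r][c] for m in maps))
    return visible / (rows * cols)

def weighted_visibility_ratio(assignment, feature_maps):
    """Eq. (10): average of the combined feature visibility ratios V(C_i)
    over the k cameras; `assignment` lists each camera's feature indices."""
    ratios = [combined_ratio([feature_maps[j] for j in features])
              for features in assignment]
    return sum(ratios) / len(ratios)
```

Enumerating candidate assignments and maximizing this value sketches the selection criterion; the paper's algorithm [4] additionally minimizes the number of sensors k.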
Based on this equation, an algorithm to find the optimal sensor-feature association matrix C_opt with a minimum number of sensors is also presented.

3.3 Entropy-Based Viewpoint Selection

Stößel et al. [10] extend the notion of CLFs to an entropy-based viewpoint selection method. There, the best viewpoints for distinguishing different aggregate models, i.e. different bolt-nut combinations (Fig. 5), and their respective 2D projections are calculated. The collective entropy mapped onto a visibility map is used to derive a distinction measure for different viewing directions. High entropy determines bad viewing directions; low entropy values mark good ones.

Fig. 5. Two different nut-bolt aggregations (left/middle). The entropy index mapped into visibility maps according to [10] determines the 20 best (light lines) and worst viewing directions (bold lines from above/beneath) to distinguish them (right).

3.4 Optimal Camera and Illumination Planning

Another application using visibility maps is presented in [4]. There, the optimal viewpoints for industrial inspection tasks are calculated from a geometric model of the inspected objects (Fig. 6). The distance transform of the visibility map allows the sensor to be positioned automatically in a viewing direction that is far from vanishing points in the viewing volume. Further refinements regarding additional viewpoint requirements, e.g. minimum resolution or viewing angle, are implemented using convex scalar functions dependent on the viewpoint. Using the distance transformation of the feature visibility maps, a scalar cost can be assigned to each six-dimensional viewpoint. The final, optimal viewpoints are found by minimizing the scalar cost functions. Similarly, the visibility map can be used to find an illumination position [5] from which a desired feature illumination condition, e.g. specular or diffuse reflection, can be observed.

Fig. 6. Simple four-step algorithm to find a good initial viewpoint using visibility maps.

4 Summary and Conclusion

We have presented a versatile model-based visibility measure for geometric primitives called the visibility map.
It is easy to calculate, memory efficient, quick to use, and provides an accurate way of determining the visibility of 1D, 2D, or 3D geometric primitives of triangulated meshes from viewpoints outside the convex hull of the whole object. Several applications that prove the versatility and usefulness of this concept for object recognition, pose estimation, as well as sensor and illumination planning have been presented. Since a visibility map has to be calculated only once, a test for visibility from a specific viewing direction comprises only a table lookup. In our opinion, this versatility, together with the simplicity of its concept, renders the visibility map superior to other existing visibility measures. One shortcoming of the visibility map, however, is the fact that visibility can only be determined correctly for viewpoints outside of the convex hull of the object.

References

1. S. Y. Chen and Y. F. Li, A method of automatic sensor placement for robot vision in inspection tasks, in Proc. IEEE Int. Conf. Rob. & Automat., Washington, DC, May 2002, pp. 2545-2550.
2. C. Cowan and P. Kovesi, Automatic sensor placement from vision task requirements, IEEE Trans. Pattern Anal. Machine Intell., 10 (1988), pp. 407-416.
3. O. Cuisenaire, Distance transformations: fast algorithms and applications to medical image processing, PhD thesis, Université catholique de Louvain, Belgium, Oct. 1999.
4. M. M. Ellenrieder and H. Komoto, Model-based automatic calculation and evaluation of camera position for industrial machine vision, in Proc. SPIE Computational Imaging III, 2005.
5. M. M. Ellenrieder et al., Reflectivity function based illumination and sensor planning for industrial inspection, in Proc. SPIE Opt. Metrology Symp., Munich, 2005.
6. E. Kefalea, O. Rehse, and C. von der Malsburg, Object classification based on contours with elastic graph matching, in Proc. 3rd Int. Workshop Vis. Form, 1997.
7. K. Khawaja et al., Camera and light placement for automated visual assembly inspection, in Proc. IEEE Int. Conf. Robotics & Automation, Minneapolis, MN, April 1996, pp. 3246-3252.
8. J. J. Koenderink and A. J. van Doorn, The internal representation of solid shape with respect to vision, Biol. Cybern., 32 (1979), pp. 151-158.
9. T. Kölzow and M. M. Ellenrieder, A general approach for multi-feature, multi-sensor classification and localization of 3D objects in 2D image sequences, in Proc. SPIE Electronic Imaging Conf., vol. 5014, 2003, pp. 99-110.
10. D. Stößel et al., Viewpoint selection for industrial car assembly, in Proc. 26th DAGM Symp., Springer LNCS vol. 3175, 2004, pp. 528-535.
11. K. A. Tarabanis et al., Computing occlusion-free viewpoints, IEEE Trans. Pattern Anal. Machine Intell., 18 (1996), pp. 279-292.
12. E. Trucco et al., Model-based planning of optimal sensor placements for inspection, IEEE Trans. Robot. Automat., 13 (1997), pp. 182-194.