Baseline Detection and Localization for Invisible Omnidirectional Cameras


International Journal of Computer Vision 58(3), 2004
© 2004 Kluwer Academic Publishers. Manufactured in The Netherlands.

Baseline Detection and Localization for Invisible Omnidirectional Cameras

HIROSHI ISHIGURO
Department of Computer & Communication Sciences, Wakayama University, Japan
ishiguro@sys.wakayama-u.ac.jp

TAKUSHI SOGO
Department of Social Informatics, Kyoto University, Japan

MATTHEW BARTH
Department of Electrical Engineering, University of California, Riverside, USA

Received January 7, 2002; Revised October 30, 2003; Accepted November 5, 2003

Abstract. Two key problems for camera networks that observe wide areas with many distributed cameras are self-localization and camera identification. Although there are many methods for localizing the cameras, one of the easiest and most desirable is to estimate the camera positions by having the cameras observe each other; hence the term self-localization. If the cameras have a wide viewing field, e.g. an omnidirectional camera, and can observe each other, the baseline distances between pairs of cameras and their relative locations can be determined. However, if the projection of a camera is relatively small in the images of the other cameras and is not readily visible, the baselines cannot be detected. In this paper, a method is proposed to determine the baselines and relative locations of these invisible cameras. The method consists of two processes executed simultaneously: (a) statistically detecting the baselines among the cameras, and (b) localizing the cameras by using information from (a) and propagating triangle constraints. Process (b) handles the localization in the case where the cameras can observe each other, and it does not require complete observation among all the cameras. However, if many cameras cannot observe each other because of poor image resolution, it does not work. The baseline detection of process (a) solves this problem. The methodology is described in detail and results are provided for several scenarios.

Keywords: omnidirectional camera, distributed vision, invisible camera, identification, localization, baseline detection, triangle constraint, constraint propagation

1. Introduction

In recent computer vision research, a number of multiple camera system applications and approaches have been proposed. One of the most popular applications of a multiple camera system is to monitor humans and other moving objects. Several monitoring systems have been developed in the VSAM project sponsored by DARPA in the USA (VSAM, 2001; Collins et al., 1999). The basic strategy, which detects moving regions in images by background subtraction and tracks them using calibrated cameras, is not particularly novel compared with previous work. However, progress has been made in that the systems are far more robust through the use of contextual information (Medioni et al., 2001). As studied in VSAM and other similar projects around the world, the most important aspect of a multiple camera system is to monitor a wide area. Previously, the authors have proposed a distributed omnidirectional vision system as a new information infrastructure for

monitoring dynamic worlds (Ishiguro, 1997; Ishiguro and Nishimura, 2001). The wide viewing field of the omnidirectional camera is suitable for monitoring tasks that require observing targets from various viewing directions. We have developed a real-time human tracking system which covers a wide area with a relatively small number of omnidirectional cameras.

One of the key problems in multiple camera systems is camera localization. In wide-area monitoring systems where cameras are widely distributed, it is not easy to precisely measure the locations by hand. Therefore, systems that observe wide areas with many distributed cameras need a better, preferably automatic, method for camera localization. This paper proposes such an automatic camera localization method.

Before discussing the method, let us review previous approaches to camera calibration and localization for multiple camera systems. Jain and his colleagues used a well-known method for camera localization and calibration (Jain and Wakimoto, 1995; Boyd et al., 1998). In this work, a target is used that can be observed from all cameras in the system. Torr and Murray developed a more robust and elegant method for the wide-baseline stereo calibration problem (Torr and Murray, 1997). Using precisely calibrated cameras, Kanade developed a multiple camera system that provides the best view of players in an American football stadium (Eyevision, 2001). In these research approaches, the purpose was to observe a relatively small area with multiple cameras. Thus, all cameras can observe a common target for the calibration process. In contrast, the purpose of this paper is to find the positions of widely distributed cameras for monitoring a wide area, not to reconstruct the precise geometry of targets. Real-time human tracking is one of the applications. In addition, it is not so difficult to place the cameras at the same height in such applications. Therefore, the problem we should solve is to find camera positions in a wide 2-D space. The problem cannot be considered as general camera calibration, but rather as a labeling problem among cameras.

Let us consider the problem we solve in this paper again. A system consisting of many cameras distributed in a wide space monitors dynamic events. For this purpose, an omnidirectional camera that has a wide viewing field is an ideal imaging device. Thus, the problem is considered as a localization problem of omnidirectional cameras in a distributed omnidirectional vision system (Section 2) where the cameras are placed at the same height. A simple method for automatic localization is to have the cameras observe each other and to find their locations from the angles between the cameras. Suppose there are three cameras A, B, and C observing each other. Each camera has two projections of the other cameras in its image. For example, camera C has two projections a and b of cameras A and B. From the distance between a and b, the angle between the directions CA and CB can be measured. Each camera measures the angle between the other cameras in the same way. From the acquired angles, the triangle ABC can be determined up to a scale factor. In this paper, an automatic method is proposed for localizing cameras based on this simple idea.

The fundamental idea is simple, but it is not easy to apply in a real-world multiple camera system. In a real-world system, two key difficulties arise:
1. In general, it is difficult to distinguish the camera projections when the system consists of identical cameras. Even if a camera observes all the projections of the other cameras, it is difficult to identify which camera is which, since they all have the same visual features. In this paper, we refer to this as the identification problem.
2. A more serious problem is that today's cameras are becoming increasingly smaller due to progress in CCD technology and associated circuitry. As a result, we cannot expect that all cameras are observable as projections in the other cameras' images. We call this the invisible camera problem.

Basically, we can determine the camera positions by solving the identification problem if the cameras observe each other, but we need to deal with the invisible camera problem if the camera projections in the images are very small. Therefore, we deal with the invisible camera problem and show how to find the baseline directions (directions to other cameras) in the images. In order to solve this problem, the system observes moving objects in the environment. If we detect the three baselines among three cameras by solving the invisible camera problem, we can determine their positions up to a scale factor. However, in the case where many cameras exist and the cameras observe each other, an algorithm that

solves the identification problem efficiently determines the camera positions. That is, two algorithms, for the invisible camera problem and for the identification problem, are executed simultaneously, and the algorithm for the invisible camera problem gives supplemental information to the algorithm for the identification problem.

Let us summarize the basic assumptions of this paper here. (1) The system consists of many omnidirectional cameras. (2) The cameras are placed at the same height. (3) Many of the cameras observe each other. (4) There are cameras that cannot observe each other because of poor image resolution. (5) There are moving objects in the environment. These assumptions are reasonable enough for the practical systems discussed in the next section. By solving the problems based on these assumptions, we can realize robust localization in a distributed omnidirectional vision system consisting of many small omnidirectional cameras. In the following sections, we propose methods to solve the invisible camera problem (Section 3) and the identification problem (Section 4).

2. Distributed Omnidirectional Vision Systems

The Internet has changed the world rather significantly. Many distributed computers connected in various places are enhancing human communication abilities. On many of these computers, cameras can be connected for monitoring purposes, leading to a distributed vision system. As the next-generation Internet unfolds, it is expected that each computer will have the ability to acquire real-world information via cameras, which will more tightly couple virtual worlds with the real world. The distributed omnidirectional vision system used in our experimentation is a testbed of such a next-generation Internet system. The basic concept of this system and fundamental problem definitions were proposed in Ishiguro (1997).

Generally speaking, it is not trivial to develop a computer vision system that can be used in the real world. The sensor data are noisy and the environment changes readily. One approach to solving this problem is to use many cameras, each of which executes simple and robust vision tasks. Complex vision tasks are realized by integrating the cameras through the network.

Figure 1. Low-cost and compact omnidirectional camera (it includes a C-MOS CCD camera on the bottom).

The omnidirectional camera shown in Fig. 1 is a key device of the distributed vision system. It has a wide visual field and is an ideal camera for both observing wide areas and automatically localizing the cameras. Standard rectilinear cameras have a limited visual field of several tens of degrees. As a result, the arrangements of these cameras in a multiple camera system are rather restricted if they are to observe each other. Omnidirectional cameras do not have this restriction; they can observe each other in any direction. Based on this fundamental idea and by using omnidirectional cameras as the key sensors, we have developed various distributed vision systems. One of them is the robust and real-time human tracking system shown in Fig. 2. Sixteen omnidirectional cameras distributed in the room track multiple humans simultaneously by communicating with each other. In the development of the system, we acquired the camera positions by manually measuring the projections in the cameras. However, this measurement process takes a long time and is rather tedious. Similar camera appearances are confusing.
Further, it is not easy to find projections of the cameras because of the small size of the sensors. As wider areas are covered with additional sensors, an automatic localization method is needed. This paper contributes to solving this fundamental problem for distributed omnidirectional vision systems.

Figure 2. Distributed omnidirectional vision system for tracking multiple humans in real time.

3. Solution for the Invisible Camera Problem: Statistical Estimation of the Baseline Directions Among the Cameras

As described in Section 2, distributed omnidirectional vision systems have difficulty in measuring camera positions, due to the invisible camera problem. As a solution to the problem, this section proposes an estimation method for the baseline directions among the cameras. The camera positions are computed from the baseline directions.

3.1. Fundamental Idea

Figure 3 illustrates the fundamental idea of the proposed method. In Fig. 3, there are two omnidirectional cameras. When there is an object moving among them and it passes the points a, b, and c, cameras 1 and 2 observe it at the same azimuth angle. On the other hand, when the object passes the points d, e, and f, camera 1 observes it at the same azimuth angle, but camera 2 observes it at different azimuth angles. Thus, an object passing along the baseline between two cameras is always projected onto the camera views at the same azimuth angle. Assuming that the object moves randomly among the cameras, the baseline direction can be estimated by memorizing pairs of azimuth angles of the object projections in each camera view, and by checking the pairs that are obtained relatively many times.

Figure 3. Fundamental idea for baseline estimation.

3.2. Algorithm for Statistical Estimation

Based on the above discussion, the baseline directions among the cameras are statistically estimated. Assuming that the cameras may be accidentally moved in the real-world environment, the proposed method dynamically estimates the baseline directions based on dynamic (i.e., real-time) information obtained by observing moving objects, as follows. Each camera simultaneously observes objects and determines the azimuth angle to each object:

d^1_1, d^1_2, ..., d^1_{m_1}, ..., d^1_{M_1},
d^2_1, d^2_2, ..., d^2_{m_2}, ..., d^2_{M_2},        (1)
...
d^N_1, d^N_2, ..., d^N_{m_N}, ..., d^N_{M_N}

where N is the number of cameras, and M_i is the number of objects observed by camera i. d^i_{m_i} is the azimuth angle of the m_i-th object observed by camera i (represented in camera i's local coordinates). Note that the number of detected objects may differ from camera to camera. Then, every pair of azimuth angles is considered:

(d^1_1, d^2_1), (d^1_1, d^2_2), ..., (d^1_{m_1}, d^2_{m_2}), ..., (d^1_{M_1}, d^2_{M_2}),
(d^1_1, d^3_1), (d^1_1, d^3_2), ..., (d^1_{m_1}, d^3_{m_3}), ..., (d^1_{M_1}, d^3_{M_3}),
...
(d^i_1, d^j_1), (d^i_1, d^j_2), ..., (d^i_{m_i}, d^j_{m_j}), ..., (d^i_{M_i}, d^j_{M_j}),        (2)
...
(d^{N-1}_1, d^N_1), (d^{N-1}_1, d^N_2), ..., (d^{N-1}_{m_{N-1}}, d^N_{m_N}), ..., (d^{N-1}_{M_{N-1}}, d^N_{M_N})

In general, these pairs are represented as p_{i,j,m_i,m_j} = (d^i_{m_i}, d^j_{m_j}). The estimation algorithm memorizes these pairs as possible baseline directions, with an initial reliability. By iterating observation, a number of pairs of azimuth angles are obtained, and the reliability of each pair is increased or decreased. Finally, only the pairs with a high reliability are considered as correct baseline directions. The detailed process is as follows (r_initial, r_inc, r_dec, T_reliable, and T_unreliable are predetermined constants):

Step 1. Obtain azimuth angle pairs p_{i,j,m_i,m_j} = (d^i_{m_i}, d^j_{m_j}) by observation, as described above.

Step 2. Initialize P_unreliable with P: P_unreliable ← P, where P is the set of possible pairs of baseline directions obtained in the previous estimation process. Note that P is empty at the beginning of the estimation process. P_unreliable is the set of pairs of azimuth angles that are considered as unreliable baseline directions. As shown above, all of the pairs in P are considered as unreliable at this moment.

Step 3. Compare every azimuth angle pair p_{i,j,m_i,m_j} = (d^i_{m_i}, d^j_{m_j}) with the elements in P:

(a) If both of the azimuth angles of p_k (an element of P) are equal to d^i_{m_i} and d^j_{m_j} of p_{i,j,m_i,m_j} (this case corresponds to the positions a, b, and c in Fig. 3), p_k is considered as a correct pair of baseline directions. The reliability of p_k is increased by r_inc, and p_k is removed from P_unreliable.
(b) If one of the azimuth angles of p_k is equal to d^i_{m_i} or d^j_{m_j} (this case corresponds to the positions d, e, and f in Fig. 3), p_k is considered as a wrong pair of baseline directions. p_k is left in P_unreliable.
(c) If no element in P matches the above conditions, p_{i,j,m_i,m_j} is considered as a new possible pair of baseline directions, and is added to P with an initial reliability r_initial.

Step 4. With respect to the elements included in P_unreliable, decrease the reliability of the corresponding elements in P by r_dec. If the reliability becomes smaller than a threshold T_unreliable, remove the element from P.

Whenever the cameras observe the objects, the above steps are performed to update the reliability. Finally, the elements in P whose reliability is greater than a threshold T_reliable are considered as correct baseline directions.

When comparing azimuth angles in Step 3, an azimuth angle α is considered as equal to β if α = β or α = β ± π (i.e., α is opposite to β), since an object on the baseline may be located at three different positions with respect to the cameras in omnidirectional stereo, as shown in Fig. 4. For the actual baseline directions (let these directions be d_1 and d_2), the following three azimuth angle pairs are possible:

1. (d_1, d_2) (see Fig. 4(a)),
2. (d_1 + π, d_2) (see Fig. 4(b)),
3. (d_1, d_2 + π) (see Fig. 4(c)).
A fourth pair, (d_1 + π, d_2 + π), is theoretically impossible, except for the case when camera 1 observes an object in direction (d_1 + π) and camera 2 accidentally observes another, similar object in direction (d_2 + π).

Figure 4. Three different positions on the baseline.
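To make the update rule above concrete, a minimal sketch in Python is given below. It is an illustration of Steps 1 to 4 rather than the implementation used in our experiments; the quantization into 512 bins and the constants follow the values quoted in Sections 3.3 and 3.4, the tolerance of the equality test is an assumption, and the helper names are ours.

```python
import math

# Assumed constants (Section 3.4 uses r_inc = 7, r_dec = 1, r_initial = 7,
# T_reliable = 250, T_unreliable = 0); N_BINS = 512 follows theta = 2*pi/512.
R_INC, R_DEC, R_INIT = 7.0, 1.0, 7.0
T_RELIABLE, T_UNRELIABLE = 250.0, 0.0
N_BINS = 512

P = {}  # (i, j, bin_i, bin_j) -> reliability

def to_bin(azimuth):
    """Quantize an azimuth angle (radians) into one of N_BINS directions."""
    return int(azimuth % (2 * math.pi) / (2 * math.pi) * N_BINS) % N_BINS

def same_direction(b1, b2, tol=1):
    """Step 3 equality test: equal, or opposite by pi, within a small bin tolerance."""
    d = abs(b1 - b2) % N_BINS
    d = min(d, N_BINS - d)
    return d <= tol or abs(d - N_BINS // 2) <= tol

def update(observations):
    """One observation round.  `observations[i]` is the list of azimuth
    angles (radians) of the objects currently seen by camera i."""
    unreliable = set(P)                       # Step 2: mark every stored pair unreliable
    for i in observations:
        for j in observations:
            if j <= i:
                continue
            for di in observations[i]:
                for dj in observations[j]:
                    bi, bj = to_bin(di), to_bin(dj)
                    matched = False
                    for key in list(P):
                        ki, kj, kbi, kbj = key
                        if (ki, kj) != (i, j):
                            continue
                        both = same_direction(kbi, bi) and same_direction(kbj, bj)
                        one = same_direction(kbi, bi) or same_direction(kbj, bj)
                        if both:              # Step 3(a): object near the baseline for both cameras
                            P[key] += R_INC
                            unreliable.discard(key)
                            matched = True
                        elif one:             # Step 3(b): only one camera sees the baseline direction
                            matched = True    # the pair stays in `unreliable`
                    if not matched:           # Step 3(c): new candidate pair
                        P[(i, j, bi, bj)] = R_INIT
    for key in unreliable:                    # Step 4: decay and prune
        if key in P:
            P[key] -= R_DEC
            if P[key] < T_UNRELIABLE:
                del P[key]

def baselines():
    """Pairs whose reliability exceeds T_reliable are reported as baselines."""
    return [k for k, r in P.items() if r > T_RELIABLE]
```

In the simulated environment of Section 3.4.1, update() would simply be called once per observation round with the azimuth angles of the randomly placed objects, and baselines() inspected after a large number of rounds.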

By iterating the estimation process, the above azimuth angle pairs (actually regarded as the same azimuth angle pair) are obtained with a high reliability with respect to the baseline between the two cameras. Since the pair (d_1 + π, d_2 + π) is obtained relatively fewer times than the other three pairs, it can be distinguished from them by checking how often each pair is obtained. In addition, the opposite azimuth angle pair (d_1, d_2) indicates the actual direction from one camera to the other.

3.3. Increase Ratio of the Reliability

The quality of the results depends on the increase ratio of the reliability (r_inc : r_dec). For example, the method may detect many wrong baselines with a high ratio (r_inc >> r_dec), since the reliability of the azimuth angle pairs quickly becomes greater than the threshold T_reliable, while it cannot detect baselines with a low ratio (r_inc << r_dec), since the reliability remains smaller than T_reliable. In this subsection, we discuss how to determine a proper increase ratio for baseline estimation.

Figure 5 shows the configuration of two cameras 1 and 2, where θ represents the angular resolution of the cameras, and integer values k_1 and k_2 are assigned to each direction (0 ≤ k_1θ, k_2θ < 2π). δ_1 and δ_2 (0 ≤ δ_1, δ_2 < θ) represent the differential angle between the actual baseline direction and the direction of the reference axis of each camera (the zero azimuth). l is the baseline length and L is the observable range of the cameras (0 < l < 2L). N is the number of different directions, which is given by 2π/θ.

Let us consider the azimuth angle pair (k_1, k_2) = (0, 0) that indicates the baseline directions. This pair is obtained when the object is located in one of the regions R (light gray regions) in Fig. 5. Since this pair is obtained many times by iterating observation, the reliability of the pair is increased by r_inc in Step 3(a), as described in Section 3.2. On the other hand, if the object is located in one of the regions S (dark gray regions) in Fig. 5 (i.e., near the baseline), one of the cameras observes it in the baseline direction, but the other camera does not. In this case, azimuth angle pairs (k_1, k_2) = (0, ∗) and (k_1, k_2) = (∗, 0) (where ∗ takes an arbitrary value other than 0) are obtained, and the reliability of these pairs, including the pair (k_1, k_2) = (0, 0), is decreased by r_dec in Step 3(b) and Step 4.

In the above process, the pair (k_1, k_2) = (0, 0) that indicates the correct baseline directions should remain with relatively high reliability compared to the other pairs. Since the number of times each pair is obtained by iterating observation depends on the size of each region in Fig. 5, the increase ratio r_inc and the decrease ratio r_dec of the reliability must satisfy the following inequality:

r_inc · R > r_dec · S        (3)

where R and S indicate the sizes of the regions R and S, respectively. Inequality (3) means that the reliability of the pair (k_1, k_2) = (0, 0) should increase in total.

Then, an azimuth angle pair other than the baseline (e.g., (k_1, k_2) = (2, N/2 − 2)) is considered. This pair is obtained when an object is located in the region R' (see Fig. 6), and its reliability is increased by r_inc. On the other hand, if the object is located in one of the regions S', azimuth angle pairs (k_1, k_2) = (2, ∗) and (k_1, k_2) = (∗, N/2 − 2) (where ∗ takes an arbitrary value other than N/2 − 2 and 2, respectively) are obtained, and the reliability of these pairs, including the pair (k_1, k_2) = (2, N/2 − 2), is decreased by r_dec. Since the pair (k_1, k_2) = (2, N/2 − 2) does not indicate the correct baseline direction, its reliability should decrease over the iterated observations. Therefore, r_inc and r_dec must satisfy the following inequality:

r_inc · R' < r_dec · S'        (4)

where R' and S' indicate the sizes of the regions R' and S', respectively. Inequality (4) means that the reliability of the pair (k_1, k_2) = (2, N/2 − 2) should decrease in total. Note that the actual values of R' and S' depend on the location (i.e., the values of (k_1, k_2)). Therefore, Inequality (4) must be satisfied at an arbitrary location:

r_inc / r_dec < min_{k_1, k_2} (S' / R')        (5)

where k_1 ≠ 0 and k_2 ≠ 0. Consequently, r_inc and r_dec should satisfy:

S / R < r_inc / r_dec < min_{k_1, k_2} (S' / R')        (6)

for arbitrary δ_1, δ_2, and l (0 < l < 2L). However, such r_inc and r_dec do not exist, since: when δ_1 and δ_2 are close to 0 or θ (i.e., the difference between the reference axis and the actual baseline direction becomes large), R becomes small and S becomes large, and S/R (the left side of Inequality (6)) becomes large; and when l is relatively small compared to L (i.e., the two cameras are very close to each other), or when l is larger than L, min_{k_1,k_2}(S'/R') (the right side of Inequality (6)) becomes small.

In order to determine proper r_inc and r_dec that satisfy Inequality (6), the other values, δ_1, δ_2, and l, should be limited to a specific range. For example, the results of preliminary experimentation show that Inequality (6) is satisfied with r_inc/r_dec = 7.0 on condition that 0.5L ≤ l ≤ 1.0L, θ = 2π/512, and the error margin of k_1 and k_2 is ±1.
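As a rough numerical check of this condition, one can estimate by simulation how often the correct pair is reinforced versus weakened for a given configuration. The following sketch makes several assumptions that are not in the text (objects uniformly distributed in a square of half-size L, both reference axes aligned with the baseline, i.e., δ_1 = δ_2 = 0, and the function name is ours), so it only illustrates the balance expressed by Inequalities (3) and (4).

```python
import math, random

def expected_drift(l, L, r_inc=7.0, r_dec=1.0, n_bins=512, trials=200_000):
    """Estimate the average reliability change per observation for the
    true baseline pair (k1, k2) = (0, 0) of two cameras placed at
    (0, 0) and (l, 0), with objects uniform in a box of half-size L."""
    theta = 2 * math.pi / n_bins
    gain = 0.0
    for _ in range(trials):
        x, y = random.uniform(-L, L), random.uniform(-L, L)
        # Quantized azimuths seen by camera 1 (at the origin) and camera 2
        # (at (l, 0)); both reference axes point along +x, so the baseline
        # corresponds to bins (0, 0) up to the pi-ambiguity of Fig. 4.
        k1 = int(math.atan2(y, x) % (2 * math.pi) / theta)
        k2 = int(math.atan2(y, x - l) % (2 * math.pi) / theta)
        on1 = k1 in (0, n_bins // 2)          # camera 1 sees the object on the baseline
        on2 = k2 in (0, n_bins // 2)          # camera 2 sees the object on the baseline
        if on1 and on2:
            gain += r_inc                     # Step 3(a): pair (0, 0) reinforced
        elif on1 or on2:
            gain -= r_dec                     # Step 3(b)/4: pair (0, 0) weakened
    return gain / trials

# The drift should be positive for the true pair when 0.5*L <= l <= 1.0*L
# and r_inc/r_dec = 7, mirroring the preliminary result quoted above.
print(expected_drift(l=5.0, L=8.0))
```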

3.4. Experimentation

We have evaluated the proposed method in both a simulated and a real environment. Figure 7 shows the common configuration of four omnidirectional cameras in both environments. In these experiments, the following parameters are used: r_inc = 7, r_dec = 1, r_initial = 7, T_reliable = 250, T_unreliable = 0 (see Section 3.2 for the detailed meanings of the parameters). The cameras determine azimuth angles to

objects with a resolution of 360/512 degrees (i.e., θ = 2π/512).

Figure 7. Camera configuration (top view).

3.4.1. Simulated Environment. In the simulated environment, the method iterates the following process: (1) randomly place several objects in the environment, (2) measure the azimuth angles to the objects within approximately 8 m of each camera (i.e., L = 8.0; this is determined based on the experimentation in the real environment described below), and (3) perform the estimation process described in Section 3.2. As shown in Fig. 8, when there is one object in the environment, the method detected all six baselines after 100,000 observations. With three objects, it also detected six baselines; however, two of them were next to the actual baselines, and two of the actual baselines were not detected. In the case of five objects, the method detected sixteen baselines, which include all six correct baselines, four baselines next to the actual ones, and six wrong baselines. It seems that the method detected wrong baselines on account of many false matches among the projections of objects in the camera views.

Figure 8. The number of detected baselines in the simulated environment.

3.4.2. Real Environment. We have also evaluated the method in the real environment, with the same camera configuration as in the simulation (see Fig. 9). In this experimentation, the cameras detect objects (usually walking people) by background subtraction and measure the azimuth angles to the objects. Figure 10 shows four omnidirectional images taken with the cameras.

Figure 9. Outdoor experimentation.

Figure 10. Unwrapped omnidirectional images. Each of the white clusters indicates the azimuth angle to an object detected by background subtraction.

The graph at the bottom of each image shows the result of background subtraction based on intensity, where the horizontal center of each cluster is considered as the azimuth angle to the detected object. In the real environment, we should ignore stationary objects. Otherwise, the same pair of azimuth angles is continuously obtained by observing a stationary object, resulting in an unexpected increase of the reliability of the pair in Step 3(a) of the estimation process (see Section 3.2), even if the pair represents a wrong baseline direction. Therefore, in this experimentation, we have added the following step to the estimation process described in Section 3.2:

(After Step 1) With respect to a pair p_{i,j,m_i,m_j} = (d^i_{m_i}, d^j_{m_j}), if the projections of an object at the azimuth angles d^i_{m_i} and d^j_{m_j} do not move in the omnidirectional views of cameras i and j, respectively, the method considers p_{i,j,m_i,m_j} as belonging to a stationary object and ignores it in Step 3.

This step also eliminates observation errors when false objects are continuously detected at the same azimuth by background subtraction due to background noise. In this experimentation, the error margin used for comparing azimuth angles in Step 3 is 1 (the unit is 360/512 degrees).
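A minimal sketch of this additional filtering step is given below. The motion threshold and the data layout are our assumptions, not part of the original system; the filtered detections would then be passed to the reliability update described in Section 3.2.

```python
MOTION_THRESHOLD = 1   # assumed, in bins of 360/512 degrees
N_BINS = 512

def circ_dist(a, b, n_bins=N_BINS):
    """Circular distance between two quantized azimuth bins."""
    d = abs(a - b) % n_bins
    return min(d, n_bins - d)

def filter_stationary(current, previous):
    """Drop detections whose projection has not moved since the previous
    round, so stationary objects and persistent background-subtraction
    noise never reach Step 3.  `current` and `previous` map a camera id
    to the list of quantized azimuth bins detected in that round."""
    moving = {}
    for cam, bins in current.items():
        prev = previous.get(cam, [])
        moving[cam] = [b for b in bins
                       if all(circ_dist(b, p) > MOTION_THRESHOLD for p in prev)]
    return moving
```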

Figure 11 shows the number of detected baselines in the real environment. After 250,000 observations (approximately 4.5 hours), the method detected 10 baselines. Figure 12 shows the directions of the detected baselines overlaid on the actual camera positions. Two of them, indicated with the arrows, are wrong; however, the others indicate nearly correct directions. Note that 20 directions are shown in Fig. 12 based on the detected baselines, since each baseline is represented by a pair of two azimuth angles.

Figure 11. The number of detected baselines in the real environment.

Figure 12. Detected baselines in the real environment (top view). The arrows indicate wrong directions.

4. Solution for the Identification Problem: Propagation of a Triangle Constraint

Section 3 has proposed a solution for the invisible camera problem. Even if a camera is small, the method provides the angular information between cameras which is necessary for camera localization. This section proposes how to effectively and precisely localize many cameras. The method proposed in Section 3 solves the identification problem between cameras. However, if we consider a general case, some of the cameras may observe each other. In such a case, we have to solve the identification problem of the camera projections to estimate the baselines. The method proposed here handles the general camera localization and identification.

4.1. The Algorithm for Identification and Localization

Given N omnidirectional cameras located randomly within a region, the overall goal is to identify all of the cameras and to know the relative positions between them. Prior to describing the details of the algorithm, several assumptions must be stated:

1. Each camera has an omnidirectional vision sensor and can view other cameras.
2. All cameras have the same body that can readily be found in the environment; however, the cameras cannot be visually identified by each other.
3. Each camera cannot precisely measure the distance to other cameras (although rough distance measurements may be possible from the image size of the observed cameras).

Each camera observes the other cameras and determines the azimuth angle to each, relative to some base viewing angle. These data can be represented as:

r_1 : (d_1, d_2, ..., d_{N_1})
r_2 : (d_1, d_2, ..., d_{N_2})        (7)
...
r_N : (d_1, d_2, ..., d_{N_N})

where r_i is the camera ID, and d_n is the azimuth angle to the n-th observed camera (for the N_i cameras observable by camera r_i). From these data, the following angles between two observed cameras can be determined:

θ^1_{1,2}, θ^1_{1,3}, ..., θ^1_{1,N_1}, θ^1_{2,3}, θ^1_{2,4}, ..., θ^1_{2,N_1}, ..., θ^1_{N_1-1,N_1}
θ^2_{1,2}, θ^2_{1,3}, ..., θ^2_{1,N_2}, θ^2_{2,3}, θ^2_{2,4}, ..., θ^2_{2,N_2}, ..., θ^2_{N_2-1,N_2}        (8)
...
θ^N_{1,2}, θ^N_{1,3}, ..., θ^N_{1,N_N}, θ^N_{2,3}, θ^N_{2,4}, ..., θ^N_{2,N_N}, ..., θ^N_{N_N-1,N_N}

where the superscript represents the ID of the observing camera and the subscripts index the observed cameras. For each observing camera, the angles between all of the observed camera combinations are represented. Note that the algorithm does not assume that each camera can see an equal number of other cameras; therefore, N_i represents the total number of observed cameras for camera i. This angle representation can be simplified as follows:

θ^1_1, θ^1_2, ..., θ^1_{m_1}, ..., θ^1_{M_1}
θ^2_1, θ^2_2, ..., θ^2_{m_2}, ..., θ^2_{M_2}        (9)
...
θ^N_1, θ^N_2, ..., θ^N_{m_N}, ..., θ^N_{M_N}

In this case, the single subscripts index the observed camera combinations. m_j is the index and M_j is the total number of observed camera combinations for observing camera j.

4.1.1. Triangle Constraint. At this point, we want to look at different combinations of these angles. One of the key constraints used in the algorithm is the fact that the relative angles between three cameras always add up to 180 degrees. Each camera represents a vertex in a triangle, and the angles between cameras must add up to 180 degrees (see Fig. 13). We refer to this as the triangle constraint (Kato et al., 1999). We consider different observed camera angles from combinations of three different observing cameras:

θ^i_{m_i} + θ^j_{m_j} + θ^k_{m_k} = 180°        (10)

For all combinations of three cameras (r_i, r_j, r_k), all observed camera combinations (indexed by m_i = [1, ..., M_i], m_j = [1, ..., M_j], and m_k = [1, ..., M_k]) are checked. The resulting triplet combinations that satisfy the triangle constraint will allow us to compute the relative positions of the cameras.

Figure 13. Triangle constraint.

Figure 14. Neighboring triangles.

Figure 15. Impossible triangles.

4.1.2. Triangle Verification. The resulting triplets from the previous step may contain impossible triangles. These impossible triangles can be classified into four different types, as shown in Fig. 15. In order to eliminate these impossible triangle combinations, additional processing must be carried out on the triplets generated from the previous step. This processing involves evaluating neighboring triangle candidates (see Fig. 14) generated from the triangle constraint. The procedure is as follows:

1. Each triangle from the candidate list is selected.
2. For a particular triangle, each edge is examined. An edge of a triangle is represented by the two angles on each end, (θ^i_{m_i}, θ^j_{m_j}). All of the other candidate triangles are then examined to see if they contain the same edge. For example, if one candidate triangle is represented by (θ^i_{m_i}, θ^j_{m_j}, θ^k_{m_k}), another triangle (θ^i_{m_i}, θ^j_{m_j}, θ^l_{m_l}) shares the edge (r_i, r_j).

3. For all pairs of triangle candidates that share an edge, check whether other candidate triangles exist that contain the opposite edge and one of the common vertices of the original triangle pair. In the example, the opposite edge would be (r_k, r_l). Candidate triangles would be checked to see if they contain this edge and vertex r_i or r_j.

If such triangles exist and all angles observed by all cameras are different from each other, the triangles

(r_i, r_j, r_k), (r_i, r_j, r_l), (r_i, r_k, r_l), (r_j, r_k, r_l)        (11)

are uniquely determined. When the triangles are uniquely determined, the projections (directions) of the other cameras are identified between the images taken by the cameras, and at the same time, the positions can be computed. That is, this method solves the identification problem for the projections, and then determines the locations. Further, the locations are precisely computed with sufficient information, since all of the cameras can observe each other.

Figure 16. A case where the single triangle verification method does not correctly identify the angles.

Figure 17. Iterative verification.

4.1.3. Error Handling. Three major difficulties arise in actual situations:

1. Some of the cameras may have identical angles and corresponding combinations, as shown in Fig. 16. In such cases, the triangle verification technique does not identify the camera projections.
2. Some of the angles belonging to an observing camera may have significant errors, and as a result the triangle constraint may not be met.
3. In a real environment, obstacles may exist which obstruct the cameras' views of each other.

In order to handle these problems, the triangle constraint can be applied allowing for an error δ in the angle observations. If δ is set too small, the triangle constraint will not be met in many cases where there are valid triangles, and too few candidate triangles will be generated. If δ is too large, then too many angles will meet the triangle constraint, generating too many candidate triangles. In this latter case, it is possible to apply the triangle verification technique described previously once more. In fact, for the case when there are many cameras, the triangle verification technique can be extended beyond simply finding a single opposite, verifying triangle that supports the hypothesis of the original triangle. For a large number of cameras, many verification triangles can be found. Figure 17 is an example of this. In this figure, consider triangle (1, 2, 3) as the reference triangle. With neighboring triangles (1, 2, 4) and (1, 3, 4), we can identify projections between cameras 1, 2, 3, and 4. With neighboring triangles (2, 3, 5) and (1, 3, 5), we can identify projections between cameras 1, 2, 3, and 5. Further, by considering the verification triangle (2, 3, 4) as a new reference triangle, we can identify projections between cameras 2, 3, 4, and 6 with neighboring triangles (2, 3, 6) and (3, 4, 6). We apply this process to all triangle candidates acquired using the triangle constraint and sum up the number of verification triangles. In the example of Fig. 17, the total number of verification triangles for the reference triangle (1, 2, 3) is 2 × 3 = 6. The triangles that have the maximum number of verification triangles can be considered as the best solution. Even so, this best solution may not be unique, since there could be several solutions that have an equal maximum number of verification triangles. In order to overcome this problem, positioning information can be used.
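Before turning to how positions are computed, the candidate generation of Section 4.1.1 and a simplified form of the verification count can be sketched in code. This is an illustration only: the input format, the tolerance, and the support measure (which checks only shared observing cameras rather than the full shared-edge and opposite-edge test described above) are simplifications of ours, not the implementation used in the experiments.

```python
from itertools import combinations, product

def find_triangle_candidates(angles, delta=2.0):
    """Step 1: list every triplet of observed angles, one per camera in a
    camera triple, whose sum is 180 degrees within a tolerance `delta`.

    `angles[i]` is the list of angles (degrees) between pairs of projections
    observed by camera i, i.e. the theta^i_m of Eq. (9)."""
    candidates = []
    for i, j, k in combinations(sorted(angles), 3):
        for mi, mj, mk in product(range(len(angles[i])),
                                  range(len(angles[j])),
                                  range(len(angles[k]))):
            s = angles[i][mi] + angles[j][mj] + angles[k][mk]
            if abs(s - 180.0) <= delta:
                candidates.append(((i, mi), (j, mj), (k, mk)))
    return candidates

def support_count(candidate, candidates):
    """Crude stand-in for the verification-triangle count of Section 4.1.2:
    the number of other candidates that share two observing cameras (an
    'edge') with this one.  The full procedure additionally checks the
    opposite edge and a common vertex."""
    cams = {c for c, _ in candidate}
    count = 0
    for other in candidates:
        if other is candidate:
            continue
        if len(cams & {c for c, _ in other}) == 2:
            count += 1
    return count

# Example with three cameras, each observing one angle of a near-equilateral layout:
angles = {1: [60.0], 2: [60.0], 3: [59.5]}
cands = find_triangle_candidates(angles, delta=2.0)
best = max(cands, key=lambda c: support_count(c, cands)) if cands else None
print(cands, best)
```

Candidates with the largest support would then be passed on to the positioning step described next.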
Given a single solution, the relative positions of the sensors can be determined from a set of reference triangles. For each reference triangle, the position information can also be calculated from the associated verification triangles. With noisy observations, this position information will be slightly different from the reference-triangle-based positions. The camera positions are determined by a least-squares method. First, we select two sensors as reference

cameras for determining a global coordinate system. Then, each camera position (X_r, Y_r) is computed in the global coordinate system as

(X_r, Y_r) = ( (1/n) Σ_{i=1}^{n} X_i , (1/n) Σ_{i=1}^{n} Y_i )        (12)

where (X_i, Y_i) is a position computed from one triangle, and n is the number of triangles that share the same vertex. It is possible to estimate the error between these position estimates as follows:

E = Σ_i ( p(r_i) − p'(r_i) )^2        (13)

where p(r_i) is the sensor position computed using projections from the reference triangle, and p'(r_i) is the sensor position computed using projections from the neighboring verification triangles. The solution that has the minimum positioning error E can be used as the best solution.

4.1.4. The Process and Computational Cost. The method proposed in this paper filters out possible solutions using the triangle verification technique and selects the solution that has the minimum positioning error. The process is summarized as follows:

Step 1. List all triplet combinations that satisfy the triangle constraint with an error δ in the angle observations (the unique solution is obtained in the case where all of the angles are different from each other).
Step 2. Apply the triangle verification technique to all of the triplets and eliminate invalid triplets from the list (the unique solution is obtained in the case where the combinations of angles for a camera are different from the angles observed by the other cameras).
Step 3. Estimate the positioning error for all of the remaining candidates and select the solution that has the minimum error.

In this process, suppose a unique solution is acquired at the end of Step 1. The computational cost will be

_{N}C_{3} = O(N^3)        (14)

If a unique solution is acquired at the end of Step 2, the maximum computational cost (in the case where all triplets remain from Step 1) will be

_{N}C_{3} · (N − 3)! > O(N^3)        (15)

Theoretically, the computational cost is high. However, Step 1 typically filters out almost all of the candidates, and only a few good candidates remain, to which the triangle verification is successfully applied. Based on our preliminary experimentation (see the next section), the computation for up to 10 cameras can be achieved in real time. If needed, parallel computation can be employed for increased performance. In order to perform the parallel computation, the cameras should be divided into local groups. The coarse range information given by the camera size projected on the omnidirectional images can be used for the localization. We consider that the proposed method will be practical by using such local and parallel computation for a small number of cameras.

4.2. Experimental Results

In order to verify our method, both simulation and real-world experiments have been carried out.

4.2.1. Simulation Experiments. A simulation program has been created that can randomly place omnidirectional sensors within a region. For each camera, the azimuth directions to the other sensors are determined. From these data, the angles between observed cameras (i.e., θ^i_{m_i}) are calculated. The algorithm described in the previous section is then performed. Figure 18 shows six simulation results for the case where all cameras can precisely observe all of the other cameras in the region. For each simulation, the camera locations have been randomly generated. The square dots show the ground truth camera locations and the circular dots show the reconstructed positions; they completely overlap.

In another simulation case, observation errors are introduced.
A mis-identified camera is added to the observation angles of the cameras. In this case, all of the cameras still observe each other, but they also observe an object that is mis-identified as a camera. As before, the observation angles are generated and processed. The results are shown in Fig. 19. The square dots show the ground truth camera positions and the circular dots show the reconstructed results. Again, the camera positions are correctly reconstructed. In Fig. 19(a),

there are six cameras and an object which looks like a camera.

Figure 18. Simulation results for verification of the algorithm.

Figure 19. Simulation results in cases where there are objects similar to a camera.

Figure 20 shows the performance of the proposed method. In the table, Cameras, Triangles, Triangle constraint, and Propagation denote the number of cameras, the number of possible triangles without identifying the cameras, the number of triangles filtered out by the triangle constraint, and the number of triangles filtered out by propagating the triangle verification, respectively. The triangle constraint leaves many triangle candidates; however, the propagation of the triangle verification filters out almost all of the wrong candidates. The remaining candidates are finally filtered out by computing the locations, as discussed in the previous section. In this simulation, we used a standard personal computer with a Pentium Pro CPU and 56 Mbyte of memory. The computational time is quite short for up to seven cameras. In the case where the system consists of up to seven cameras, or where the cameras can be divided into small groups of up to seven cameras each, this algorithm solves the identification and localization problems in real time.
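For completeness, the position averaging of Eq. (12) and the positioning-error score of Eq. (13) can be written compactly. The sketch below is illustrative only; the triangulation that produces the per-triangle position estimates is omitted, and the data layout and function names are assumptions of ours.

```python
from collections import defaultdict

def average_positions(triangle_estimates):
    """Eq. (12): average, per camera, the positions computed from all
    triangles that share that camera as a vertex.

    `triangle_estimates` is a list of dicts, one per triangle, mapping a
    camera id to the (X, Y) computed for it from that triangle."""
    sums, counts = defaultdict(lambda: [0.0, 0.0]), defaultdict(int)
    for est in triangle_estimates:
        for cam, (x, y) in est.items():
            sums[cam][0] += x
            sums[cam][1] += y
            counts[cam] += 1
    return {cam: (sx / counts[cam], sy / counts[cam])
            for cam, (sx, sy) in sums.items()}

def positioning_error(reference, verification):
    """Eq. (13): sum of squared distances between the positions obtained
    from the reference triangles and from the verification triangles."""
    return sum((reference[c][0] - verification[c][0]) ** 2 +
               (reference[c][1] - verification[c][1]) ** 2
               for c in reference if c in verification)
```

The candidate solution with the smallest such error is kept, as in Step 3 of Section 4.1.4.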

Figure 20. Performance of the method and its computational time.

4.2.2. Real-World Experiment. In addition to the simulation experiments, a real-world experiment was carried out using seven identical omnidirectional vision sensors. A picture of the sensors is shown in Fig. 21. These cameras were placed randomly on the floor of our laboratory in a region of approximately 4 × 4 meters, as shown in Fig. 21. In addition to the cameras, a trashcan was placed among the cameras in order to occlude the views of some cameras. In this experiment, the ground truth positions of the cameras were carefully measured.

Figure 21. Seven omnidirectional cameras in a real environment.

Seven omnidirectional images were acquired from the cameras, as shown in Fig. 23. In order to determine the azimuth angles to the observed cameras, the cameras must first be detected in the images. Because all of the omnidirectional cameras are set on the level floor and all are of equal height, the images of the observed cameras will fall within a very narrow circular band in the omnidirectional image, as shown in Fig. 23. Therefore, we can constrain our image processing to this narrow region. Within the circular band, we perform a simple region-based segmentation algorithm that uses connectivity analysis. The results of the segmentation are distinct blobs within the image region. Simple features of these blobs are used to detect the omnidirectional cameras: the cameras are dark compared to their background and have a distinctive square shape.
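A rough sketch of this detection step is shown below. It is illustrative only: the band radii, the darkness threshold, the blob-size limits, and the use of scipy.ndimage.label for the connectivity analysis are our assumptions, not the processing pipeline actually used in the experiment.

```python
import numpy as np
from scipy import ndimage

def detect_camera_azimuths(omni_image, center, r_min, r_max,
                           dark_threshold=60, min_area=20, max_area=400):
    """Detect dark camera blobs inside the narrow circular band
    [r_min, r_max] of an omnidirectional image and return the azimuth
    angle (radians, in image coordinates) of each blob's center of gravity."""
    h, w = omni_image.shape[:2]
    gray = omni_image if omni_image.ndim == 2 else omni_image.mean(axis=2)
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - center[0], yy - center[1])
    band = (r >= r_min) & (r <= r_max)
    # Cameras appear as dark, compact blobs against the background.
    mask = band & (gray < dark_threshold)
    labels, n = ndimage.label(mask)             # connectivity analysis
    azimuths = []
    for blob_id in range(1, n + 1):
        ys, xs = np.nonzero(labels == blob_id)
        if not (min_area <= ys.size <= max_area):
            continue                            # reject blobs of implausible size
        cx, cy = xs.mean(), ys.mean()           # center of gravity of the blob
        azimuths.append(np.arctan2(cy - center[1], cx - center[0]) % (2 * np.pi))
    return azimuths
```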

Once the omnidirectional cameras are detected within the image, the center of gravity of each camera blob is used to determine the azimuth angle to the observed camera. The omnidirectional images for all cameras, with the processed viewing directions, are shown in Fig. 23. The observed azimuth angles to the other identified cameras are given in Table 1. The angles between observed cameras are used as input to the positioning algorithm.

Figure 22. Image processing for detecting cameras in an omnidirectional view.

Figure 23. Omnidirectional views taken by the cameras.

In order to compare the reconstructed camera positions with the ground truth data, a common coordinate system must be used. The

results of the positioning algorithm return the relative camera positions up to a scale factor. When comparing to the ground truth positions, three items must be established: (1) a coordinate center (origin), (2) the coordinate system orientation, and (3) a scale factor.

In this experiment, camera #5 in Fig. 24 is selected as the coordinate system origin. The coordinate system orientation and scale were determined by having camera #6 lie on the x-axis, one unit length away from camera #5. With these definitions, the Cartesian coordinates of the cameras are given in Table 3. If the ground truth positions are scaled and oriented around the same coordinate system origin, it is possible to illustrate both the ground truth positions and the algorithm positions together, as shown in Fig. 24. As can be seen, the positioning errors are approximately 10% or less.

Figure 24. Comparison of ground truth camera positions and algorithm results.

Table 1. Observed azimuth angles (degrees) to the other cameras for each observing camera.

Table 2. Angles between cameras that are used in the algorithm.

Table 3. Estimated X, Y coordinate values.
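The normalization used for this comparison (camera #5 at the origin, camera #6 one unit along the x-axis) amounts to a translation, a rotation, and a uniform scaling. A minimal sketch, with the camera numbering taken from the experiment and the function name assumed:

```python
import math

def normalize(positions, origin_id=5, axis_id=6):
    """Translate, rotate, and scale 2-D camera positions so that
    `origin_id` sits at (0, 0) and `axis_id` at (1, 0)."""
    ox, oy = positions[origin_id]
    ax, ay = positions[axis_id]
    dx, dy = ax - ox, ay - oy
    scale = math.hypot(dx, dy)
    cos_a, sin_a = dx / scale, dy / scale        # rotate the axis camera onto +x
    out = {}
    for cam, (x, y) in positions.items():
        tx, ty = x - ox, y - oy
        out[cam] = ((tx * cos_a + ty * sin_a) / scale,
                    (-tx * sin_a + ty * cos_a) / scale)
    return out

# Applying the same normalization to the reconstructed and to the ground
# truth positions makes them directly comparable, as in Fig. 24.
```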

5. Conclusions

In this paper, we have proposed two related methods for localizing the cameras of a distributed omnidirectional vision system. The methods solve the invisible camera problem for detecting baselines when cameras do not observe projections of other cameras. Further, the methods then localize the cameras by propagating triangles to solve the identification problem.

With respect to the solution of the invisible camera problem, the increase ratio of the reliability should be properly determined in the baseline estimation method, as described in Section 3. Although the discussion on this point is not yet sufficient, we have shown experimentally that the method can detect the baselines among the sensors without knowledge of the object correspondences. Further consideration of the increase ratio, as well as verification of the identification method, remains as future work.

With respect to the solution of the identification problem, future work is to refine the algorithm and make it more efficient. The processing time increases exponentially as the number of cameras increases. Further, the algorithm currently operates on static snapshots of the other cameras.

We consider that the developed methods can be used as fundamental techniques for multiple camera systems that observe a wide area for monitoring and recognizing human activities and that provide rich information through the computer network.

References

Adelson, E.H. and Bergen, J.R. 1991. The plenoptic function and the elements of early vision. In Computational Models of Visual Processing, M. Landy and J.A. Movshon (Eds.), MIT Press.
Aggarwal, J.K. and Cai, Q. 1997. Human motion analysis: A review. In Proc. IEEE Nonrigid and Articulated Motion Workshop.
Boyd, J., Hunter, E., Kelly, P., Tai, L., Phillips, C., and Jain, R. 1998. MPI-video infrastructure for dynamic environments. In IEEE Int. Conf. Multimedia Systems.
Collins, R., Lipton, A., and Kanade, T. 1999. A system for video surveillance and monitoring. In Proc. American Nuclear Society (ANS) Eighth International Topical Meeting on Robotics and Remote Systems, Pittsburgh, PA.
Eyevision, 2001.
Hong, J. et al. 1991. Image-based homing. In Proc. Int. Conf. Robotics and Automation.
Ishiguro, H., Yamamoto, M., and Tsuji, S. 1992. Omni-directional stereo. IEEE Trans. PAMI, 14(2).
Ishiguro, H. 1997. Distributed vision system: A perceptual information infrastructure for robot navigation. In Proc. IJCAI.
Ishiguro, H. 1998. Development of low-cost compact omnidirectional vision sensors and their applications. In Proc. Int. Conf. Information Systems, Analysis and Synthesis.
Ishiguro, H. and Nishimura, T. 2001. VAMBAM: View and motion based aspect models for distributed omnidirectional vision systems. In Proc. Int. Joint Conf. Artificial Intelligence.
Jain, R. and Wakimoto, K. 1995. Multiple perspective interactive video. In Proc. Int. Conf. Multimedia Computing and Systems.
Kato, K., Ishiguro, H., and Barth, M. 1999. Identifying and localizing robots in a multi-robot system. In Proc. Int. Conf. Intelligent Robots and Systems.
Medioni, G., Cohen, I., Bremond, F., and Nevatia, R. 2001. Event detection and analysis from video streams. IEEE Trans. PAMI, 23(8).
Nayar, S.K. and Baker, S. 1997. Catadioptric image formation. In Proc. Image Understanding Workshop.
Rees, D.W. 1970. Panoramic television viewing system. United States Patent No. 3,505,465.
Sarachik, K. 1989. Characterizing an indoor environment with a mobile robot and uncalibrated stereo. In Proc. Int. Conf. Robotics and Automation.
Torr, P.H.S. and Murray, D.W. 1997. The development and comparison of robust methods for estimating the fundamental matrix. Int. J. Computer Vision, 24(3):271-300.
VSAM, 2001. Video Surveillance and Monitoring (VSAM) project, DARPA.
Yagi, Y. and Kawato, S. 1990. Panoramic scene analysis with conic projection. In Proc. IROS.
Yamazawa, K., Yagi, Y., and Yachida, M. 1993. Omnidirectional imaging with hyperboloidal projection. In Proc. Int. Conf. Intelligent Robots and Systems.


More information

Panoramic Appearance Map (PAM) for Multi-Camera Based Person Re-identification

Panoramic Appearance Map (PAM) for Multi-Camera Based Person Re-identification Panoramic Appearance Map (PAM) for Multi-Camera Based Person Re-identification Tarak Gandhi and Mohan M. Trivedi Computer Vision and Robotics Research Laboratory University of California San Diego La Jolla,

More information

Integration of Multiple-baseline Color Stereo Vision with Focus and Defocus Analysis for 3D Shape Measurement

Integration of Multiple-baseline Color Stereo Vision with Focus and Defocus Analysis for 3D Shape Measurement Integration of Multiple-baseline Color Stereo Vision with Focus and Defocus Analysis for 3D Shape Measurement Ta Yuan and Murali Subbarao tyuan@sbee.sunysb.edu and murali@sbee.sunysb.edu Department of

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

Measurements using three-dimensional product imaging

Measurements using three-dimensional product imaging ARCHIVES of FOUNDRY ENGINEERING Published quarterly as the organ of the Foundry Commission of the Polish Academy of Sciences ISSN (1897-3310) Volume 10 Special Issue 3/2010 41 46 7/3 Measurements using

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 14 130307 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Stereo Dense Motion Estimation Translational

More information

ECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall Midterm Examination

ECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall Midterm Examination ECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall 2008 October 29, 2008 Notes: Midterm Examination This is a closed book and closed notes examination. Please be precise and to the point.

More information

Adaptive Panoramic Stereo Vision for Human Tracking and Localization with Cooperative Robots *

Adaptive Panoramic Stereo Vision for Human Tracking and Localization with Cooperative Robots * Adaptive Panoramic Stereo Vision for Human Tracking and Localization with Cooperative Robots * Zhigang Zhu Department of Computer Science, City College of New York, New York, NY 10031 zhu@cs.ccny.cuny.edu

More information

Incremental Observable-Area Modeling for Cooperative Tracking

Incremental Observable-Area Modeling for Cooperative Tracking Incremental Observable-Area Modeling for Cooperative Tracking Norimichi Ukita Takashi Matsuyama Department of Intelligence Science and Technology Graduate School of Informatics, Kyoto University Yoshidahonmachi,

More information

CS 4758: Automated Semantic Mapping of Environment

CS 4758: Automated Semantic Mapping of Environment CS 4758: Automated Semantic Mapping of Environment Dongsu Lee, ECE, M.Eng., dl624@cornell.edu Aperahama Parangi, CS, 2013, alp75@cornell.edu Abstract The purpose of this project is to program an Erratic

More information

Horus: Object Orientation and Id without Additional Markers

Horus: Object Orientation and Id without Additional Markers Computer Science Department of The University of Auckland CITR at Tamaki Campus (http://www.citr.auckland.ac.nz) CITR-TR-74 November 2000 Horus: Object Orientation and Id without Additional Markers Jacky

More information

Chapter 12 3D Localisation and High-Level Processing

Chapter 12 3D Localisation and High-Level Processing Chapter 12 3D Localisation and High-Level Processing This chapter describes how the results obtained from the moving object tracking phase are used for estimating the 3D location of objects, based on the

More information

(a) (b) (c) Fig. 1. Omnidirectional camera: (a) principle; (b) physical construction; (c) captured. of a local vision system is more challenging than

(a) (b) (c) Fig. 1. Omnidirectional camera: (a) principle; (b) physical construction; (c) captured. of a local vision system is more challenging than An Omnidirectional Vision System that finds and tracks color edges and blobs Felix v. Hundelshausen, Sven Behnke, and Raul Rojas Freie Universität Berlin, Institut für Informatik Takustr. 9, 14195 Berlin,

More information

1998 IEEE International Conference on Intelligent Vehicles 213

1998 IEEE International Conference on Intelligent Vehicles 213 Navigation by Integrating Iconic and GPS Information Shigang Li and Akira Hayashi Faculty of Information Sciences Hiroshima City University Asaminami-ku, Hiroshima, 731-31, Japan li@im.hiroshima-cu.ac.jp

More information

Range Sensors (time of flight) (1)

Range Sensors (time of flight) (1) Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors

More information

3D Corner Detection from Room Environment Using the Handy Video Camera

3D Corner Detection from Room Environment Using the Handy Video Camera 3D Corner Detection from Room Environment Using the Handy Video Camera Ryo HIROSE, Hideo SAITO and Masaaki MOCHIMARU : Graduated School of Science and Technology, Keio University, Japan {ryo, saito}@ozawa.ics.keio.ac.jp

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera

Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Tomokazu Satoy, Masayuki Kanbaray, Naokazu Yokoyay and Haruo Takemuraz ygraduate School of Information

More information

Motion Estimation for Video Coding Standards

Motion Estimation for Video Coding Standards Motion Estimation for Video Coding Standards Prof. Ja-Ling Wu Department of Computer Science and Information Engineering National Taiwan University Introduction of Motion Estimation The goal of video compression

More information

Mathematics of a Multiple Omni-Directional System

Mathematics of a Multiple Omni-Directional System Mathematics of a Multiple Omni-Directional System A. Torii A. Sugimoto A. Imiya, School of Science and National Institute of Institute of Media and Technology, Informatics, Information Technology, Chiba

More information

An Edge-Based Approach to Motion Detection*

An Edge-Based Approach to Motion Detection* An Edge-Based Approach to Motion Detection* Angel D. Sappa and Fadi Dornaika Computer Vison Center Edifici O Campus UAB 08193 Barcelona, Spain {sappa, dornaika}@cvc.uab.es Abstract. This paper presents

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Robust and Accurate Detection of Object Orientation and ID without Color Segmentation

Robust and Accurate Detection of Object Orientation and ID without Color Segmentation 0 Robust and Accurate Detection of Object Orientation and ID without Color Segmentation Hironobu Fujiyoshi, Tomoyuki Nagahashi and Shoichi Shimizu Chubu University Japan Open Access Database www.i-techonline.com

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

On-Line Recognition of Mathematical Expressions Using Automatic Rewriting Method

On-Line Recognition of Mathematical Expressions Using Automatic Rewriting Method On-Line Recognition of Mathematical Expressions Using Automatic Rewriting Method T. Kanahori 1, K. Tabata 1, W. Cong 2, F.Tamari 2, and M. Suzuki 1 1 Graduate School of Mathematics, Kyushu University 36,

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

Local Image Registration: An Adaptive Filtering Framework

Local Image Registration: An Adaptive Filtering Framework Local Image Registration: An Adaptive Filtering Framework Gulcin Caner a,a.murattekalp a,b, Gaurav Sharma a and Wendi Heinzelman a a Electrical and Computer Engineering Dept.,University of Rochester, Rochester,

More information

A Street Scene Surveillance System for Moving Object Detection, Tracking and Classification

A Street Scene Surveillance System for Moving Object Detection, Tracking and Classification A Street Scene Surveillance System for Moving Object Detection, Tracking and Classification Huei-Yung Lin * and Juang-Yu Wei Department of Electrical Engineering National Chung Cheng University Chia-Yi

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion

More information

Human Motion Detection and Tracking for Video Surveillance

Human Motion Detection and Tracking for Video Surveillance Human Motion Detection and Tracking for Video Surveillance Prithviraj Banerjee and Somnath Sengupta Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur,

More information

Perceptual Grouping from Motion Cues Using Tensor Voting

Perceptual Grouping from Motion Cues Using Tensor Voting Perceptual Grouping from Motion Cues Using Tensor Voting 1. Research Team Project Leader: Graduate Students: Prof. Gérard Medioni, Computer Science Mircea Nicolescu, Changki Min 2. Statement of Project

More information

ASIAGRAPH 2008 The Intermediate View Synthesis System For Soccer Broadcasts

ASIAGRAPH 2008 The Intermediate View Synthesis System For Soccer Broadcasts ASIAGRAPH 2008 The Intermediate View Synthesis System For Soccer Broadcasts Songkran Jarusirisawad, Kunihiko Hayashi, Hideo Saito (Keio Univ.), Naho Inamoto (SGI Japan Ltd.), Tetsuya Kawamoto (Chukyo Television

More information

Fully Automatic Methodology for Human Action Recognition Incorporating Dynamic Information

Fully Automatic Methodology for Human Action Recognition Incorporating Dynamic Information Fully Automatic Methodology for Human Action Recognition Incorporating Dynamic Information Ana González, Marcos Ortega Hortas, and Manuel G. Penedo University of A Coruña, VARPA group, A Coruña 15071,

More information

HOG-Based Person Following and Autonomous Returning Using Generated Map by Mobile Robot Equipped with Camera and Laser Range Finder

HOG-Based Person Following and Autonomous Returning Using Generated Map by Mobile Robot Equipped with Camera and Laser Range Finder HOG-Based Person Following and Autonomous Returning Using Generated Map by Mobile Robot Equipped with Camera and Laser Range Finder Masashi Awai, Takahito Shimizu and Toru Kaneko Department of Mechanical

More information

Coarse-to-fine image registration

Coarse-to-fine image registration Today we will look at a few important topics in scale space in computer vision, in particular, coarseto-fine approaches, and the SIFT feature descriptor. I will present only the main ideas here to give

More information

A Novel Smoke Detection Method Using Support Vector Machine

A Novel Smoke Detection Method Using Support Vector Machine A Novel Smoke Detection Method Using Support Vector Machine Hidenori Maruta Information Media Center Nagasaki University, Japan 1-14 Bunkyo-machi, Nagasaki-shi Nagasaki, Japan Email: hmaruta@nagasaki-u.ac.jp

More information

Automatic Generation of Indoor VR-Models by a Mobile Robot with a Laser Range Finder and a Color Camera

Automatic Generation of Indoor VR-Models by a Mobile Robot with a Laser Range Finder and a Color Camera Automatic Generation of Indoor VR-Models by a Mobile Robot with a Laser Range Finder and a Color Camera Christian Weiss and Andreas Zell Universität Tübingen, Wilhelm-Schickard-Institut für Informatik,

More information

AS AUTOMAATIO- JA SYSTEEMITEKNIIKAN PROJEKTITYÖT CEILBOT FINAL REPORT

AS AUTOMAATIO- JA SYSTEEMITEKNIIKAN PROJEKTITYÖT CEILBOT FINAL REPORT AS-0.3200 AUTOMAATIO- JA SYSTEEMITEKNIIKAN PROJEKTITYÖT CEILBOT FINAL REPORT Jaakko Hirvelä GENERAL The goal of the Ceilbot-project is to design a fully autonomous service robot moving in a roof instead

More information

Toward Part-based Document Image Decoding

Toward Part-based Document Image Decoding 2012 10th IAPR International Workshop on Document Analysis Systems Toward Part-based Document Image Decoding Wang Song, Seiichi Uchida Kyushu University, Fukuoka, Japan wangsong@human.ait.kyushu-u.ac.jp,

More information

Camera Parameters Estimation from Hand-labelled Sun Sositions in Image Sequences

Camera Parameters Estimation from Hand-labelled Sun Sositions in Image Sequences Camera Parameters Estimation from Hand-labelled Sun Sositions in Image Sequences Jean-François Lalonde, Srinivasa G. Narasimhan and Alexei A. Efros {jlalonde,srinivas,efros}@cs.cmu.edu CMU-RI-TR-8-32 July

More information

Revision of Inconsistent Orthographic Views

Revision of Inconsistent Orthographic Views Journal for Geometry and Graphics Volume 2 (1998), No. 1, 45 53 Revision of Inconsistent Orthographic Views Takashi Watanabe School of Informatics and Sciences, Nagoya University, Nagoya 464-8601, Japan

More information

3D object recognition used by team robotto

3D object recognition used by team robotto 3D object recognition used by team robotto Workshop Juliane Hoebel February 1, 2016 Faculty of Computer Science, Otto-von-Guericke University Magdeburg Content 1. Introduction 2. Depth sensor 3. 3D object

More information

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016 edestrian Detection Using Correlated Lidar and Image Data EECS442 Final roject Fall 2016 Samuel Rohrer University of Michigan rohrer@umich.edu Ian Lin University of Michigan tiannis@umich.edu Abstract

More information

Pictures at an Exhibition

Pictures at an Exhibition Pictures at an Exhibition Han-I Su Department of Electrical Engineering Stanford University, CA, 94305 Abstract We employ an image identification algorithm for interactive museum guide with pictures taken

More information

Model Based Perspective Inversion

Model Based Perspective Inversion Model Based Perspective Inversion A. D. Worrall, K. D. Baker & G. D. Sullivan Intelligent Systems Group, Department of Computer Science, University of Reading, RG6 2AX, UK. Anthony.Worrall@reading.ac.uk

More information

Estimation of Camera Motion with Feature Flow Model for 3D Environment Modeling by Using Omni-Directional Camera

Estimation of Camera Motion with Feature Flow Model for 3D Environment Modeling by Using Omni-Directional Camera Estimation of Camera Motion with Feature Flow Model for 3D Environment Modeling by Using Omni-Directional Camera Ryosuke Kawanishi, Atsushi Yamashita and Toru Kaneko Abstract Map information is important

More information

Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation

Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation Jong Taek Lee, M. S. Ryoo, Matthew Riley, and J. K. Aggarwal Computer & Vision Research Center Dept. of Electrical & Computer Engineering,

More information

A Robust and Efficient Motion Segmentation Based on Orthogonal Projection Matrix of Shape Space

A Robust and Efficient Motion Segmentation Based on Orthogonal Projection Matrix of Shape Space A Robust and Efficient Motion Segmentation Based on Orthogonal Projection Matrix of Shape Space Naoyuki ICHIMURA Electrotechnical Laboratory 1-1-4, Umezono, Tsukuba Ibaraki, 35-8568 Japan ichimura@etl.go.jp

More information

SYSTEM FOR ACTIVE VIDEO OBSERVATION OVER THE INTERNET

SYSTEM FOR ACTIVE VIDEO OBSERVATION OVER THE INTERNET SYSTEM FOR ACTIVE VIDEO OBSERVATION OVER THE INTERNET Borut Batagelj, Peter Peer, Franc Solina University of Ljubljana Faculty of Computer and Information Science Computer Vision Laboratory Tržaška 25,

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

A 100Hz Real-time Sensing System of Textured Range Images

A 100Hz Real-time Sensing System of Textured Range Images A 100Hz Real-time Sensing System of Textured Range Images Hidetoshi Ishiyama Course of Precision Engineering School of Science and Engineering Chuo University 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551,

More information

10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1.

10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1. Range Sensors (time of flight) (1) Range Sensors (time of flight) (2) arge range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic

More information

Omni-directional Multi-baseline Stereo without Similarity Measures

Omni-directional Multi-baseline Stereo without Similarity Measures Omni-directional Multi-baseline Stereo without Similarity Measures Tomokazu Sato and Naokazu Yokoya Graduate School of Information Science, Nara Institute of Science and Technology 8916-5 Takayama, Ikoma,

More information

Research Article Model Based Design of Video Tracking Based on MATLAB/Simulink and DSP

Research Article Model Based Design of Video Tracking Based on MATLAB/Simulink and DSP Research Journal of Applied Sciences, Engineering and Technology 7(18): 3894-3905, 2014 DOI:10.19026/rjaset.7.748 ISSN: 2040-7459; e-issn: 2040-746 2014 Maxwell Scientific Publication Corp. Submitted:

More information

A Miniature-Based Image Retrieval System

A Miniature-Based Image Retrieval System A Miniature-Based Image Retrieval System Md. Saiful Islam 1 and Md. Haider Ali 2 Institute of Information Technology 1, Dept. of Computer Science and Engineering 2, University of Dhaka 1, 2, Dhaka-1000,

More information

Robust Steganography Using Texture Synthesis

Robust Steganography Using Texture Synthesis Robust Steganography Using Texture Synthesis Zhenxing Qian 1, Hang Zhou 2, Weiming Zhang 2, Xinpeng Zhang 1 1. School of Communication and Information Engineering, Shanghai University, Shanghai, 200444,

More information

RECONSTRUCTION OF REGISTERED RANGE DATA USING GEODESIC DOME TYPE DATA STRUCTURE

RECONSTRUCTION OF REGISTERED RANGE DATA USING GEODESIC DOME TYPE DATA STRUCTURE RECONSTRUCTION OF REGISTERED RANGE DATA USING GEODESIC DOME TYPE DATA STRUCTURE Makoto Hirose and Kazuo Araki Graduate School of Computer and Cognitive Sciences, Chukyo University, Japan hirose@grad.sccs.chukyo-u.ac.jp,

More information

arxiv: v1 [cs.cv] 2 May 2016

arxiv: v1 [cs.cv] 2 May 2016 16-811 Math Fundamentals for Robotics Comparison of Optimization Methods in Optical Flow Estimation Final Report, Fall 2015 arxiv:1605.00572v1 [cs.cv] 2 May 2016 Contents Noranart Vesdapunt Master of Computer

More information

Binarization of Color Character Strings in Scene Images Using K-means Clustering and Support Vector Machines

Binarization of Color Character Strings in Scene Images Using K-means Clustering and Support Vector Machines 2011 International Conference on Document Analysis and Recognition Binarization of Color Character Strings in Scene Images Using K-means Clustering and Support Vector Machines Toru Wakahara Kohei Kita

More information

CONSTRUCTION OF THE VORONOI DIAGRAM BY A TEAM OF COOPERATIVE ROBOTS

CONSTRUCTION OF THE VORONOI DIAGRAM BY A TEAM OF COOPERATIVE ROBOTS CONSTRUCTION OF THE VORONOI DIAGRAM BY A TEAM OF COOPERATIVE ROBOTS Flavio S. Mendes, Júlio S. Aude, Paulo C. V. Pinto IM and NCE, Federal University of Rio de Janeiro P.O.Box 2324 - Rio de Janeiro - RJ

More information

A Two-stage Scheme for Dynamic Hand Gesture Recognition

A Two-stage Scheme for Dynamic Hand Gesture Recognition A Two-stage Scheme for Dynamic Hand Gesture Recognition James P. Mammen, Subhasis Chaudhuri and Tushar Agrawal (james,sc,tush)@ee.iitb.ac.in Department of Electrical Engg. Indian Institute of Technology,

More information

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of

More information

MOTION. Feature Matching/Tracking. Control Signal Generation REFERENCE IMAGE

MOTION. Feature Matching/Tracking. Control Signal Generation REFERENCE IMAGE Head-Eye Coordination: A Closed-Form Solution M. Xie School of Mechanical & Production Engineering Nanyang Technological University, Singapore 639798 Email: mmxie@ntuix.ntu.ac.sg ABSTRACT In this paper,

More information

3D Digitization of a Hand-held Object with a Wearable Vision Sensor

3D Digitization of a Hand-held Object with a Wearable Vision Sensor 3D Digitization of a Hand-held Object with a Wearable Vision Sensor Sotaro TSUKIZAWA, Kazuhiko SUMI, and Takashi MATSUYAMA tsucky@vision.kuee.kyoto-u.ac.jp sumi@vision.kuee.kyoto-u.ac.jp tm@i.kyoto-u.ac.jp

More information

Visual Attention Control by Sensor Space Segmentation for a Small Quadruped Robot based on Information Criterion

Visual Attention Control by Sensor Space Segmentation for a Small Quadruped Robot based on Information Criterion Visual Attention Control by Sensor Space Segmentation for a Small Quadruped Robot based on Information Criterion Noriaki Mitsunaga and Minoru Asada Dept. of Adaptive Machine Systems, Osaka University,

More information

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS

METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS M. Lefler, H. Hel-Or Dept. of CS, University of Haifa, Israel Y. Hel-Or School of CS, IDC, Herzliya, Israel ABSTRACT Video analysis often requires

More information

A Summary of Projective Geometry

A Summary of Projective Geometry A Summary of Projective Geometry Copyright 22 Acuity Technologies Inc. In the last years a unified approach to creating D models from multiple images has been developed by Beardsley[],Hartley[4,5,9],Torr[,6]

More information

Robot localization method based on visual features and their geometric relationship

Robot localization method based on visual features and their geometric relationship , pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department

More information

Coarse-to-Fine Search Technique to Detect Circles in Images

Coarse-to-Fine Search Technique to Detect Circles in Images Int J Adv Manuf Technol (1999) 15:96 102 1999 Springer-Verlag London Limited Coarse-to-Fine Search Technique to Detect Circles in Images M. Atiquzzaman Department of Electrical and Computer Engineering,

More information

Optimizing Monocular Cues for Depth Estimation from Indoor Images

Optimizing Monocular Cues for Depth Estimation from Indoor Images Optimizing Monocular Cues for Depth Estimation from Indoor Images Aditya Venkatraman 1, Sheetal Mahadik 2 1, 2 Department of Electronics and Telecommunication, ST Francis Institute of Technology, Mumbai,

More information

Automatic Tracking of Moving Objects in Video for Surveillance Applications

Automatic Tracking of Moving Objects in Video for Surveillance Applications Automatic Tracking of Moving Objects in Video for Surveillance Applications Manjunath Narayana Committee: Dr. Donna Haverkamp (Chair) Dr. Arvin Agah Dr. James Miller Department of Electrical Engineering

More information