Video-based system for satellite proximity operations

7th ESA Workshop on Advanced Space Technologies for Robotics and Automation 'ASTRA 2002'

Video-based system for satellite proximity operations

Piotr Jasiobedzki (1), Michael Greenspan (2), Gerhard Roth (3), HoKong Ng (1), Neil Witcomb (1)

(1) MD Robotics, 9445 Airport Rd., Brampton ON, Canada, L6S 4J3, {pjasiobe, hng, ...}
(2) Dept. of Electrical and Computer Engineering, Queen's University, Kingston ON, Canada, K7L 3N6, michael.greenspan@ece.queensu.ca
(3) National Research Council of Canada, Bldg. M-50, 1200 Montreal Road, Ottawa ON, Canada, K1A 0R6, gerhard.roth@nrc.ca

1 INTRODUCTION

The servicing of satellites in space will extend their life and reduce operational costs. Servicing with an unmanned servicer spacecraft offers a reduction in launch cost compared with that of a manned spacecraft. However, the communication delay, intermittence and limited bandwidth between the ground and on-orbit segments render direct teleoperated ground control infeasible. The servicer will therefore have to operate with a high degree of autonomy. The on-board sensor system should allow the servicer spacecraft to approach the target satellite, identify its unknown position and orientation from an arbitrary direction, and manoeuvre to align with a docking interface [5,7,10].

Autonomy during rendezvous and docking, proximity operations and actual servicing requires the ability to estimate and track the pose (position and orientation) of the serviced spacecraft, as well as that of its mechanical interfaces and serviceable modules. Automatic analysis of images from servicer cameras can provide such information. The vision systems currently used in space detect visual targets, which simplifies their design. While this approach is attractive when viable, the use of targets introduces operational restrictions, due to the limited ranges of distances and angles for which the targets are visible and can be detected. A better solution would be to develop space vision systems that detect natural surface features and use known models of the satellites.

2 SATELLITE PROXIMITY OPERATIONS

Currently satellites are deployed, retrieved and serviced during Space Shuttle missions. For example, the Hubble Space Telescope has been serviced four times during its ten years on orbit. Spartan is a family of satellites that carry experimental payloads, and Spartans have been flown multiple times on board the shuttle [16]. During satellite rendezvous and capture the shuttle crew uses camera images to locate the satellite at a distance of approximately 100 m, to approach it, and finally to capture it with the Canadarm, the shuttle manipulator. Selected images used by the crew and the arm operator during one such mission are shown in Figure 1. The first two images were captured by the shuttle bay cameras during the approach.

Figure 1: Images observed during satellite capture. The left and centre images were captured by the shuttle bay cameras; the right one was captured by the end-effector camera.

The second image shows the arm in a hovering position just before the final capture phase. The last image was captured by the end-effector camera after capture of the interface. It is expected that unmanned satellite servicing will follow the same operational scenarios as manned servicing. This will entail moving the servicer to the vicinity of the target satellite, preparing the arm for capture, moving the servicer into a position from which the arm can reach the target satellite interface, and finally capturing the satellite. Space vision systems will provide three-dimensional information about the position and orientation of the observed spacecraft for such operations. The operating range of such a vision system may extend from approximately 100 m to contact. It is expected that other systems relying on GPS, orbital mechanics, radar or lidar measurements will be used to bring the servicer within this range of the target spacecraft.

2.1 Operational phases

Different sensors and algorithms may be used for different phases of the satellite proximity operations, depending on the operational requirements and sensor characteristics (field of view, depth of field, accuracy, etc.). The operations are typically divided by distance into long, medium and short range phases. At long range the vision system must determine the bearing and distance to the satellite of interest. At medium range the distance, pose and motion must be determined accurately and with high confidence. Only when this information is known with certainty may the servicer spacecraft approach the target satellite (short range) and dock with it or capture it with a manipulator. The specific distances that correspond to the limits of the long, medium and short ranges depend on the spacecraft involved (e.g., size, mass, geometry and manoeuvrability) and on safety requirements. The selection of the sensors and their location on the spacecraft depend on these factors and may change between missions depending on the requirements. For satellite proximity operations that may involve inspection and docking by a microsatellite [10], or capture by a manipulator, the ranges correspond approximately to: long, 100 m - 20 m; medium, 20 m - 2 m; short, 2 m to contact.

The short range subsystem will be used for well defined operations of docking or grasping, and the expected range of distances and viewing angles will be restricted by the allowed approach trajectories to the target spacecraft. Lights mounted together with the cameras may be used to illuminate the scene. The medium range poses a much more difficult challenge, as the viewing angles are unrestricted, the distance varies significantly, and the vision system relies mostly on ambient illumination from the Sun or Earth albedo. For the purpose of our work we assume that the vision system is activated within the medium range and provided with an approximate attitude towards a satellite of interest.

2.2 Expected performance

The processing rates differ between operational phases. At the medium range the vision system will estimate pose at rates of several Hz, as this data will be used either to maintain the servicer at a constant relative position with respect to the target or to bring the arm into a hovering position. The short range will be used for visual servoing of the end-effector towards the mechanical interface and will have to operate at a rate of at least 10 Hz. This will allow the arm controller to compensate for target drift or motion of the servicer. The expected accuracy of the vision system is related to the capture envelope of the mechanical interface. Depending on the interface, this envelope may extend up to 5-25 cm and 2-10 degrees of angular misalignment. These requirements are most relevant for the short range system, which should provide measurements with a much higher accuracy. For the medium range system, an accuracy of 1% of the distance and several degrees in object orientation will be sufficient.
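For illustration, the phase boundaries and performance requirements quoted above can be collected into a small configuration table. The following sketch is purely illustrative: the class and function names are ours, and the structure is only one possible way of encoding the figures given in Sections 2.1 and 2.2.

```python
# Illustrative summary of the operational phases from Sections 2.1-2.2.
# Class/field names are ours; the figures are those quoted in the text.
from dataclasses import dataclass

@dataclass(frozen=True)
class PhaseSpec:
    name: str
    far_m: float          # far boundary of the phase (metres)
    near_m: float         # near boundary of the phase (metres)
    rate: str             # required pose update rate
    accuracy: str         # required accuracy

PHASES = [
    PhaseSpec("long",   100.0, 20.0, "n/a (bearing and distance only)", "bearing/distance"),
    PhaseSpec("medium",  20.0,  2.0, "several Hz", "~1% of distance, several degrees"),
    PhaseSpec("short",    2.0,  0.0, ">= 10 Hz", "well within 5-25 cm / 2-10 deg capture envelope"),
]

def phase_for_distance(d_m: float) -> PhaseSpec:
    """Pick the operational phase for a given distance to the target (metres)."""
    for p in PHASES:                      # ordered far to near
        if d_m >= p.near_m:
            return p
    raise ValueError("negative distance")

print(phase_for_distance(12.0).name)      # -> "medium"
```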
On-orbit illumination and the imaging properties of the highly reflective materials used to cover satellites pose significant challenges for any vision system. The illumination changes from day to night as the spacecraft orbits the Earth; in Low Earth Orbit the period is about 90 minutes. The objects can be illuminated by a combination of direct sunlight (an intense directional source creating hard shadows), Earth albedo (an almost shadow-less source) and on-board lights. The insulation materials used in spacecraft manufacture are either highly reflective metallic foils or featureless white blankets that loosely cover the satellite body. This causes specular reflections and a lack of distinct and visually stable features such as lines or corners.

3 SPACE VISION TECHNOLOGIES

Data processed by vision systems may be obtained from a variety of sensors such as video cameras, and scanning and non-scanning rangefinders. The video cameras commonly used in current space operations are available at relatively low cost, and have low mass and energy requirements. Their main disadvantage is their dependence on ambient illumination and their sensitivity to direct sunlight. Scanning rangefinders [9] are independent of the ambient light and less sensitive to sunlight; this comes at the cost, weight and energy requirements of complex scanning mechanisms.

Current space vision systems rely on the presence of visual targets for their operation. This simplifies the image processing tasks, as algorithms tuned to specific targets are used. However, the targets can be used only in specific tasks (limited by distance and viewing angles), which requires defining all operations of interest at the design phase and precludes any unexpected tasks. Lack of, or mis-detection of, a target may lead to a vision system failure, as there will not be enough data to compute a reliable camera pose solution. There are also additional installation and maintenance costs. In general, targets are suitable only for predefined operations such as vision guided capture at close range. For a recent review of space vision technologies see [5,7].

4 MDR VISION SYSTEM

The MDR vision system prototype supports the medium and short range operations, i.e., approximately 20 m to 0.2 m (the minimum distance corresponds to the stand-off between the camera and the satellite surface in the contact position). This large range of distances implies using different algorithms for different operational phases. The vision system uses passive video cameras that operate under ambient and artificial illumination. The algorithms implemented in this system rely mostly on the presence of natural features and object models. Extraction and processing of redundant data provides robustness to partial data loss. To reduce mass and power requirements the system uses the same sensing and computing hardware during the different phases, reconfiguring itself as needed.

The current vision system prototype operates at two ranges (medium and short) in four different configurations, see Figure 2. Multiple configurations may run in parallel, use the same cameras and share partially processed data. At the medium range the vision system cameras observe either the complete satellite or a significant portion of it. The system computes sparse 3D data using only natural features observed in stereo images. Three medium range functions are currently supported: 1) model-free motion estimation, 2) model-based pose acquisition, and 3) model-based pose tracking.

Figure 2: MDR vision system architecture (configuration control, user interfaces and data logging; a vision server reports object id, 3D pose, 3D motion and confidence to the spacecraft or robot controller; the stereo cameras feed sparse 3D computation, model-based pose acquisition and tracking using a 3D satellite model, and target detection with target pose estimation and tracking using a 3D target model).

4.1 3D data computation

A sparse three-dimensional representation of the scene is computed from images provided by two calibrated and synchronised stereo cameras. The images are rectified first; edges extracted independently in both images are then matched along the epipolar lines and used to produce a sparse representation of the scene. The stereo processing parameters are changed adaptively so as to extract a predefined number of 3D points from each stereo pair [8].
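For illustration, the final triangulation step of such a sparse stereo pipeline can be sketched as below. This is not the implementation of [8]: it assumes the images are already rectified and that edge matching along the epipolar lines has produced per-point disparities, and the function name and the use of the standard pinhole relation Z = f*B/d are ours.

```python
# Minimal sketch of sparse stereo triangulation for rectified image pairs.
# Assumes rectified images and precomputed disparities for matched edge pixels.
import numpy as np

def disparities_to_points(u, v, disparity, fx, fy, cx, cy, baseline_m):
    """Convert matched edge pixels (u, v) with disparity d (pixels) into 3D points
    expressed in the left camera frame of a rectified stereo pair."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    d = np.asarray(disparity, dtype=float)
    valid = d > 0.0                      # zero/negative disparity carries no depth
    z = fx * baseline_m / d[valid]       # depth from disparity
    x = (u[valid] - cx) * z / fx         # back-project through the pinhole model
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z))    # N x 3 array of sparse scene points

# Example: two edge points at 0.5 m and 5 m for a 10 cm baseline, 1000 px focal length.
pts = disparities_to_points([320, 400], [240, 250], [200.0, 20.0],
                            fx=1000.0, fy=1000.0, cx=320.0, cy=240.0, baseline_m=0.1)
print(pts)
```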
4.2 Model-free motion estimation

Model-free motion estimation processes sequences of stereo images and estimates the motion of the observed satellite using structure from motion algorithms and the sparse 3D data. This motion estimate is used to determine that it is safe to approach the observed satellite, and to produce reliable 3D data for the pose acquisition module. When dealing with an isolated object, the change in camera position is simply the opposite of the change in the object position. Using this principle we compute the satellite motion by finding the relative camera motion.

Once the satellite reaches the medium range we obtain reasonably accurate depth from the stereo processing. In this situation we combine stereo and structure from motion (SFM) to compute the camera position. At each position of the stereo cameras we obtain a set of sparse 3D points. Using only the right images of the stereo sequence, we then match the 2D pixel locations of these feature points. More precisely, assume we have two stereo camera positions: left1, right1 and left2, right2. We take the 2D pixel locations of the matching stereo features in right1 and attempt to match them to the stereo features in right2. We are basically performing SFM across the image sequence [14], but using only the 2D stereo features as SFM features. This type of processing is not the same as traditional tracking, because we use rigidity to prune false matches, which is a very strong constraint. We then use the 2D co-ordinates of these matched features to compute the transformation between the right images of the stereo cameras in the sequence.

This method has a number of advantages. First, it avoids the problem of motion degeneracy that is inherent in SFM. Degeneracy occurs when the motion is pure rotation, or when the translational component of the motion is very small. Second, since the stereo rig is calibrated, we know the true scale of the computed camera motion. This is not the case with SFM alone, since without extra information about the scene geometry we cannot know the true scale of the reconstructed camera motion. Third, the results for small camera motions are no less accurate than the accuracy of the stereo data. When using SFM with small camera motions the reconstruction of the camera path is rarely accurate; with our approach the accuracy does not depend on the SFM accuracy, but on the stereo accuracy. We have implemented this approach and show the results for a sequence in which the camera is tracking a grapple fixture. The final result of this process is the camera path (or satellite motion in the camera reference frame), along with the accumulated set of 3D feature points of the observed satellite, see Figure 3.

Figure 3: First and last image of a sequence (left), and computed camera path and 3D points (right)
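Given stereo depth for features matched between two frames, the frame-to-frame motion can be recovered as the rigid transform that best aligns the two 3D point sets. The sketch below uses the standard SVD-based least-squares solution as an illustration of this step only; the paper does not specify the exact solver, and the rigidity-based pruning of false matches is omitted here.

```python
# Least-squares rigid alignment of matched 3D point sets (SVD solution).
# Illustrative sketch of the frame-to-frame motion step in Section 4.2; the actual
# system also prunes false matches using rigidity, which is not shown here.
import numpy as np

def rigid_transform(points_a, points_b):
    """Return rotation R and translation t such that R @ a + t ~= b for matched points."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)          # centroids
    H = (a - ca).T @ (b - cb)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# Example: recover a known 10 degree rotation about Z plus a small translation.
rng = np.random.default_rng(0)
pts1 = rng.uniform(-1, 1, size=(50, 3))
ang = np.deg2rad(10.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
pts2 = pts1 @ R_true.T + np.array([0.05, -0.02, 0.30])
R_est, t_est = rigid_transform(pts1, pts2)
print(np.allclose(R_est, R_true), t_est)
```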
4.3 Model-based pose acquisition

The pose acquisition module uses the 3D data produced by the model-free estimation process, along with the model of the observed satellite, to estimate its pose without using any prior information or constraints. The main idea is to apply 3D binary template matching to the data [17]. The grid elements of a 3D template are called voxels; each element in a voxel grid represents a small distinct region of space and can be assigned a binary value indicating whether that region is empty or occupied. The resolution of the voxel grid is quite coarse, a benefit of which is that it reduces the size of the pose search space by masking high frequency details. As the satellite can be positioned arbitrarily, a pure template matching approach would require a separate template for each possible pose. The approach described here is to exploit the structure of the object to resolve a subset of the pose parameters, and to apply 3D binary template matching over the reduced space of the remaining free pose parameters.

The objective is to determine a rigid transformation X, generally comprising 3 rotations and 3 translations, which aligns a surface model M with the sensed data P of the satellite. The satellite can be positioned arbitrarily within the sensor-centric coordinate system U, and the sensed data is a sparse cloud of 3D points that sample its surface at arbitrary locations. The solution has two distinct phases, each of which solves a distinct subset of the six free degrees-of-freedom (dofs) of the pose.

The first phase is motivated by the observation that many satellites, such as Radarsat, have an elongated structure. We assume that the sensed points are distributed evenly enough along the surface of the satellite to capture this elongation. The vector v that describes the major axis can be efficiently and fairly robustly determined by Principal Component Analysis (PCA). All points in P are first translated by T so that the centroid of the transformed point set Q falls at the origin. Next the covariance matrix of Q is generated as C = Σ_i q_i q_i^T, where the summation is taken over the entire point set. The vector v that points in the direction of the major axis of elongation of Q (and thus P) is the eigenvector corresponding to the maximum eigenvalue of C. The final step of phase one is to identify the two rotations that align v with U_x. These two rotations are combined into the transform R_yz and applied to the translated point set Q. Examples of the results are illustrated in Figure 4(b). The PCA method has been demonstrated to have a degree of robustness, so that a small number of spurious outliers that may result from background scene elements in the experimental system do not seriously skew the result. (In a space-borne system the background can be easily filtered, so significant outliers are not anticipated.) It is also not necessary to sense the complete structure of the satellite to correctly identify the major axis, as the elongation of the satellite is evident in local regions.
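Phase one can be prototyped in a few lines: centre the points, take the dominant eigenvector of the covariance matrix, and rotate it onto the sensor x axis. The sketch below only illustrates this idea; the function name is ours, and the rotation is built directly from the axis-angle between v and U_x (Rodrigues formula) rather than as two explicit rotations.

```python
# Illustrative phase-one sketch (Section 4.3): align the major axis of a point cloud
# with the x axis of the sensor frame using PCA. Function name and structure are ours.
import numpy as np

def align_major_axis(points):
    """Return (R, T, Q) where Q = (points - T) @ R.T has its major axis along +x."""
    P = np.asarray(points, dtype=float)
    T = P.mean(axis=0)                       # centroid translation
    Q0 = P - T
    C = Q0.T @ Q0                            # covariance (scale factor irrelevant)
    eigvals, eigvecs = np.linalg.eigh(C)     # symmetric matrix -> real eigen-decomposition
    v = eigvecs[:, np.argmax(eigvals)]       # major axis direction
    x = np.array([1.0, 0.0, 0.0])
    axis = np.cross(v, x)                    # rotation taking v onto x (Rodrigues formula)
    s, c = np.linalg.norm(axis), float(np.dot(v, x))
    if s < 1e-12:                            # v already (anti)parallel to x
        R = np.eye(3) if c > 0 else np.diag([-1.0, -1.0, 1.0])
    else:
        k = axis / s
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + s * K + (1 - c) * (K @ K)
    return R, T, Q0 @ R.T

# Example: an elongated synthetic cloud tilted in space ends up stretched along x.
rng = np.random.default_rng(1)
cloud = rng.normal(scale=[5.0, 0.5, 0.5], size=(500, 3)) @ np.linalg.qr(rng.normal(size=(3, 3)))[0]
R, T, Q = align_major_axis(cloud)
print(Q.std(axis=0))   # largest spread should be on the first (x) component
```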

Following this first phase, there still exist 4 unresolved dofs: the rotation around U_x and all three translations. These are determined in the second phase using a correlation approach, which is essentially an exhaustive 3D template matching over the remaining 4 dofs. Let the function V(M) denote the quantization of surface model M onto a voxel grid, such that all voxels that intersect the surface of M have the occupied value, and all other voxels have the empty value. We define the voxel occupancy as the set of occupied voxels of V(XM) for a given pose X of model M. Let M be initially positioned in a canonical pose within U such that its major axis is aligned with U_x, its minor axis is aligned with U_y, and its centroid is located at the origin. In preprocessing, M is rotated around U_x to n distinct pose values. The voxel occupancy of each rotation is calculated and stored in association with its rotation value. The voxel occupancies are therefore essentially 3D binary templates.

At runtime, phase one is first executed so that the point set is centred at the origin and its major axis is aligned with U_x. The point set is then quantized into the otherwise empty voxel grid, creating a binary 3D image of the sensed data. Each of the voxel templates generated during preprocessing is then correlated with the voxel grid, at all possible locations. The template location with the highest correlation value indicates the rotation around U_x and the 3 translation values of the satellite pose, within voxel accuracy. This method has been demonstrated to be efficient, executing in a few seconds, and reliable, succeeding in over 90% of the trials on full-view images. One limitation is that it applies only to elongated objects. For satellites that are not elongated, it may be possible to apply a strategy that considers other global properties, such as symmetries, to satisfy the axis alignment of phase one. It may also be efficient to apply the template matching approach of phase two over 5 free dofs, if only one dof can be resolved in phase one.

Figure 4: Pose acquisition example
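A compact way to prototype phase two is to voxelize the aligned data points into a binary grid and correlate each precomputed binary template against it, keeping the rotation and offset with the highest score. The toy sketch below only illustrates that idea: the grid sizes and data are synthetic, the function names are ours, and scipy's generic N-d correlation stands in for whatever correlation scheme the actual system uses.

```python
# Toy sketch of phase two (Section 4.3): exhaustive correlation of 3D binary templates
# against a voxelized point cloud. Illustrative only; sizes and data are synthetic.
import numpy as np
from scipy.signal import correlate

def voxelize(points, grid_shape, voxel_size, origin):
    """Quantize 3D points into a binary occupancy grid."""
    grid = np.zeros(grid_shape, dtype=float)
    idx = np.floor((np.asarray(points, dtype=float) - origin) / voxel_size).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid[tuple(idx[keep].T)] = 1.0
    return grid

def best_pose(data_grid, templates):
    """Return (rotation index, voxel offset, score) of the best-matching template."""
    best = (-1, None, -np.inf)
    for i, tpl in enumerate(templates):                  # one template per rotation about U_x
        score = correlate(data_grid, tpl, mode='same')   # 3D cross-correlation
        k = np.unravel_index(np.argmax(score), score.shape)
        if score[k] > best[2]:
            best = (i, k, float(score[k]))
    return best

# Example: embed a 5x3x3 block into a 32^3 grid and recover its location.
tpl = np.zeros((7, 7, 7)); tpl[1:6, 2:5, 2:5] = 1.0              # binary template (one "rotation")
data = np.zeros((32, 32, 32)); data[10:15, 12:15, 20:23] = 1.0   # same shape, shifted
rot, offset, score = best_pose(data, [tpl])
print(rot, offset, score)   # offset lands near the block centre, about (12, 13, 21)
```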
4.4 Model-based pose tracking

After the initial pose of the satellite is determined, the model-based tracking mode is invoked and initialised with this estimate. Tracking operates with high precision and update rate by matching the satellite model with the 3D data. Satellite servicing operations that do not require contact, such as fly-around, homing and imaging, are performed in this mode using visual servoing of the arm and/or spacecraft. During tracking the pose is determined by iteratively matching the 3D data with the model using a version of the Iterative Closest Point (ICP) algorithm [2]. Our implementation of this algorithm consists of the following steps:

1. Selection of the closest points between the data set and the model in the expected pose
2. Rejection of outliers
3. Computation of the geometrical registration between the matched data points and the model
4. Application of the geometrical registration to the data
5. Termination of the iterations after a stopping criterion is reached

The algorithm iterates steps 1-4 until the stopping criterion (convergence and/or reaching the maximum allowed number of iterations) is met. The number of iterations required to reach the minimum depends on the difference between the expected and the actual pose. The expected pose is predicted using a pose tracker that estimates both linear and angular velocities and applies the correction to the model pose. This predictor significantly reduces the number of iterations and thus reduces the processing time.
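The five steps above map directly onto a short ICP loop: find closest model points, gate out distant matches, solve the least-squares rigid step, apply it, and repeat until the update is small. The sketch below is our own illustration, not the system's implementation: it uses a k-d tree for the closest-point search (the described system precomputes distances off-line, see below), and the gate, iteration limit and tolerance values are arbitrary stand-ins for the adaptive gating described in the text.

```python
# Minimal ICP sketch following the five steps of Section 4.4. Illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(a, b):
    """Least-squares R, t with R @ a + t ~= b (SVD solution)."""
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    U, _, Vt = np.linalg.svd((a - ca).T @ (b - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(data, model_points, R0=np.eye(3), t0=np.zeros(3),
        gate=0.05, max_iter=50, tol=1e-6):
    """Register sensed `data` to `model_points`, starting from the predicted pose (R0, t0)."""
    tree = cKDTree(model_points)
    R, t = R0.copy(), t0.copy()
    for _ in range(max_iter):
        moved = data @ R.T + t                     # step 4 from the previous iteration
        dist, idx = tree.query(moved)              # step 1: closest model points
        keep = dist < gate                         # step 2: reject outliers beyond a fixed gate
        if keep.sum() < 3:
            break
        dR, dt = best_rigid(moved[keep], model_points[idx[keep]])   # step 3
        R, t = dR @ R, dR @ t + dt                 # compose the incremental update
        if np.linalg.norm(dt) < tol and np.allclose(dR, np.eye(3), atol=tol):
            break                                  # step 5: stopping criterion
    return R, t
```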

Detection of the closest points between the model and the data sets is the most computationally intensive task, as it involves computing distances for every combination of data points and model features. The complexity is O(NM), where N is the number of data points and M the number of model points; different model representations and acceleration schemes have been used to reduce it. Grouping model points into simple shapes allows closed-form solutions to be used for computing distances [8]. K-d trees have been used to reduce the number of distance computations for models represented as points or meshes. Our technique eliminates all distance computations from the on-line phase by computing them in advance, off-line. The data is stored in an octree type of structure that allows efficient storage and fast access. This, however, comes at the price of having to store the data and/or reduce the resolution of the distances. The model-based pose tracking does not require detecting high level features in the scene and matching them with model features, and it re-establishes the correspondences between the model and data points at every iteration. This significantly reduces its sensitivity to partial shadows, occlusion and local loss of data caused by reflections and image saturation. An adaptable gating mechanism eliminates outliers that are not part of the model.

4.5 Visual target based pose acquisition and tracking

The computed pose of a docking or robotic interface on the target spacecraft is used to servo the end-effector to the capture position. This computation must be performed with high accuracy, reliability and update rate to ensure success and safety. The end-effector must approach the interface along predefined trajectories and may not enter stay-out zones. These constraints on viewing directions and distances allow us to use a visual target mounted next to the interface for this operation. At close range the observed target satellite surface cannot be viewed simultaneously by both cameras, and the vision system processes monocular images. The short range module may use various target designs consisting of linear and circular features arranged in planar or 3D formations. Example targets are shown in Figure 1 and Figure 6. A three-dimensional arrangement of features provides higher accuracy at the cost of a slightly more complex design. In the experiments described in this paper we used a planar target.

The short range processing involves detection of features, pose estimation, and pose tracking and prediction. Each feature is detected by processing an image window centred around its predicted location. The processing involves adaptive segmentation, circle detection, and centroid estimation. Pose estimation maps the centroids to the target model and uses an appropriate closed-form pose estimation algorithm [1,3,6] to handle different numbers of detected target points and their formations. Each algorithm requires at least four target points to compute the pose. When more than four target points are detected, sampling techniques based on RANSAC [4] and LMedS [15] are used to reject outliers and select the group of points that gives the best pose estimate. The system automatically handles occasional loss of features due to adverse illumination or occlusion. Pose tracking estimates the linear and angular velocities of the target satellite. This motion model is used to predict the pose of the satellite in subsequent observations and reduces the size of the search windows used in feature detection.
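When more points than the minimum of four are detected, the subset used for the pose can be chosen with a RANSAC-style loop: solve the pose from a random minimal subset, count how many detected centroids reproject within a threshold, and keep the best-supported solution. The sketch below is only an illustration of that idea; OpenCV's generic solvePnP stands in for the closed-form algorithms [1,3,6] (a planar target is assumed, as in the experiments), and the threshold and iteration count are arbitrary.

```python
# Illustrative RANSAC loop for monocular target pose (Section 4.5). OpenCV's solvePnP
# stands in for the closed-form pose algorithms [1,3,6]; thresholds are arbitrary.
import numpy as np
import cv2

def ransac_target_pose(model_pts, image_pts, K, dist_coeffs,
                       n_iter=100, reproj_thresh_px=2.0, rng=None):
    """model_pts: Nx3 target features (metres, coplanar), image_pts: Nx2 centroids (pixels)."""
    model_pts = np.asarray(model_pts, dtype=np.float64)
    image_pts = np.asarray(image_pts, dtype=np.float64)
    rng = rng or np.random.default_rng()
    n = len(model_pts)
    best = (None, None, -1)                               # (rvec, tvec, inlier count)
    for _ in range(n_iter):
        sample = rng.choice(n, size=4, replace=False)     # minimal set of four points
        ok, rvec, tvec = cv2.solvePnP(model_pts[sample], image_pts[sample], K, dist_coeffs)
        if not ok:
            continue
        proj, _ = cv2.projectPoints(model_pts, rvec, tvec, K, dist_coeffs)
        err = np.linalg.norm(proj.reshape(-1, 2) - image_pts, axis=1)
        inliers = int((err < reproj_thresh_px).sum())     # support for this hypothesis
        if inliers > best[2]:
            best = (rvec, tvec, inliers)
    return best

# Example with a synthetic planar target observed 0.5 m in front of the camera.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
target = np.array([[-0.05, -0.05, 0], [0.05, -0.05, 0], [0.05, 0.05, 0],
                   [-0.05, 0.05, 0], [0.0, 0.0, 0]])
img, _ = cv2.projectPoints(target, np.zeros(3), np.array([0.0, 0.0, 0.5]), K, np.zeros(5))
rvec, tvec, n_in = ransac_target_pose(target, img.reshape(-1, 2), K, np.zeros(5))
print(n_in, tvec.ravel())   # expect 5 inliers and tvec near (0, 0, 0.5)
```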
4.6 Vision system reconfiguration

The vision system is reconfigured during its operation depending on the requirements of the specific mission, the progress of the operation, and the relative distance and orientation between the satellites. The commands may be sent by a vision system or robotic controller, or by a Guidance, Navigation and Control unit. A diagram illustrating the vision system reconfiguration during satellite capture is shown in Figure 5. The vision system controller initialises the medium range configuration upon reaching a distance MR_max (the maximum distance at which stereo produces 3D data for pose estimation). A short sequence of stereo images is processed using the model-free motion estimation module in order to estimate the satellite motion and compute reliable 3D data. If the detected velocities are below the allowed values, the vision system computes the initial pose. This process is repeated until a pose with a sufficiently high confidence value is obtained. Once this is achieved, the pose is passed to the medium range tracking module, which starts tracking from this estimate. As the stereo cameras approach the satellite, the distance and relative orientation are continuously sent to the controller; when they reach the values (SR_max) at which the short range can be used, this module is invoked. The short range module first acquires the pose of the observed target and, if the confidence is high enough, transitions to target tracking mode. Both the medium and short range modules execute in parallel until their estimates agree and the distance to the satellite reaches the minimum operational distance for the medium range, MR_min. The short range module operates until final contact and rigidisation of the end-effector at a distance of SR_min.
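The hand-over logic can be viewed as a small state machine keyed on distance, with overlapping thresholds providing the hysteresis discussed below. The following sketch is only an illustration of that idea; the threshold values, state names and transition rules are ours, not the actual controller logic, and only the MR/SR symbols follow Figure 5.

```python
# Illustrative sketch of the range-based reconfiguration logic (Section 4.6).
# Each module turns on when the distance drops below its ON threshold and turns off
# only when the distance rises above a larger OFF threshold (hysteresis), or when the
# approach passes the module's minimum operating distance. All values are examples.
THRESHOLDS = {                                      # metres; example values only
    "medium_range": {"on": 20.0, "off": 22.0, "min": 1.0},
    "short_range":  {"on": 2.0,  "off": 2.5,  "min": 0.2},
}

def active_modules(distance_m, currently_active):
    active = set(currently_active)
    for name, th in THRESHOLDS.items():
        if distance_m <= th["on"]:
            active.add(name)                        # engage once within range
        if distance_m > th["off"] or distance_m < th["min"]:
            active.discard(name)                    # disengage with hysteresis / at min range
    return active

# Simulated approach with a small drift back and forth around the 2 m boundary:
state = set()
for d in [25.0, 18.0, 2.1, 1.9, 2.2, 1.8, 0.9, 0.3]:
    state = active_modules(d, state)
    print(f"{d:4.1f} m -> {sorted(state)}")
```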

Figure 5: Vision system reconfiguration during the satellite approach and capture (timeline from far to close distance showing the start and stop of the VS controller, medium range acquisition and tracking, and short range acquisition and tracking at MR_max, SR_max, MR_min and SR_min)

The overlap between the operational ranges of the different configurations ensures a smooth and correct hand-over. This provides a safety check, allows multiple attempts at pose acquisition, and means the vision based measurements are never interrupted. There is an additional computational cost to running multiple configurations in parallel, but reusing partial results produced by common modules reduces this effect. A similar reconfiguration sequence is used during departure from the satellite. A built-in hysteresis eliminates unnecessary system reconfigurations if, for example, there is a drift between the satellite and the servicer.

5 EXPERIMENTS AND PERFORMANCE CHARACTERISATION

A prototype of the vision system has been tested in a representative laboratory environment under simulated capture of a free-floating satellite. The laboratory setup includes two industrial robots with 6 degrees of freedom each. One of the robots holds a scaled replica of a satellite and the other an end-effector with cameras. The robots are fully calibrated and can be programmed to follow predefined trajectories, allowing the performance of the vision system to be tested. They can also operate under closed-loop control using the vision system tracking modes. The workspaces of both robots overlap and the maximum separation between the replica and the cameras is 6 m. Tests at longer distances are possible, however without the ability to perform a continuous motion to contact. Testing is performed under illumination simulating direct sunlight and Earth albedo. The satellite models used in the tests have been manufactured using actual space surface materials to create realistic imaging effects.

In the conducted experiments we used two stereo cameras mounted on the end-effector. The cameras were rigidly coupled together and equipped with fixed zoom and focus lenses. Custom developed algorithms for controlling exposure were used to compensate for changes in the brightness of the scene. This optical arrangement has proven to be sufficient for distances from 0.2 m to 5 m. The first set of experiments was designed to test the performance of the vision system in different operational scenarios. The robots were commanded to follow predefined trajectories while the vision system processed images. Comparison between the recorded robot telemetry and the vision system estimates was used to assess the accuracy and robustness of the vision system. In the second set of experiments the vision system was tested during visual servoing, when the computed pose was used to drive a robotic arm to capture a free-floating satellite [11]. Figure 6 shows three images obtained during one of the visual servoing experiments (start position, intermediate and contact).

Figure 6: Images from a sequence recorded during the experiments (the first at 5 m, the last at 0.2 m)
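Accuracy against the robot telemetry can be summarised with two per-sample numbers: the norm of the translation error and the angle of the residual rotation. The sketch below shows one common way of computing these metrics; it is our own illustration, not the evaluation code used in the experiments.

```python
# Illustrative pose-error metrics for comparing vision estimates with robot telemetry
# (Section 5): translation error norm and residual rotation angle.
import numpy as np

def pose_error(R_est, t_est, R_ref, t_ref):
    """Return (translation error in the units of t, rotation error in degrees)."""
    dt = np.linalg.norm(t_est - t_ref)
    dR = R_est @ R_ref.T                                  # residual rotation
    cos_angle = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    return dt, np.degrees(np.arccos(cos_angle))

# Example: a 1 cm offset and a 0.5 degree rotation about z.
a = np.deg2rad(0.5)
R_est = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0, 0.0, 1.0]])
print(pose_error(R_est, np.array([0.01, 0.0, 0.0]), np.eye(3), np.zeros(3)))
# -> (0.01, ~0.5)
```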

The vision system processing is fully implemented in software and runs on a dual processor Pentium III, 700 MHz computer. Initial estimation of the satellite motion requires several seconds, depending on the length of the processed sequence. The pose acquisition takes several seconds, depending on the number of 3D points used (1,000-4,000). The model-based pose tracking operates at a rate of 2 Hz, including the computation of over 1,000 3D points from the stereo images. Tracking of visual targets operates at rates of over 10 Hz, depending on the number of features tracked simultaneously (typically 5-10). The accuracy of the pose estimates depends on the distance, object size, camera field of view and stereo baseline. At the medium range the accuracy is of the order of centimetres and a few degrees. Before contact the accuracy is of the order of a fraction of a degree and 1 mm, which exceeds by an order of magnitude the capture envelope of the end-effector used in the experiments. The vision system is robust to partial data loss caused by shadows, specular reflections, occlusion, and non-uniform illumination.

6 CONCLUSIONS

We have described a prototype vision system for satellite proximity operations and the results of laboratory experiments conducted with this system under representative conditions. The system relies on satellite models and natural surface features on the satellite for most of its operation. When the observed object is within the stereo range the system computes 3D data and uses it for pose estimation; at close range the vision system operates in a monocular mode. The vision system has been fully implemented and characterised in laboratory experiments under representative conditions at distances from 0.2 m to 5 m. The minimum distance corresponds to the offset between the camera and the target satellite surface in the captured position. The depth of field of the camera optics used limited the maximum distance to approximately 5 m. Extending the maximum distance to 20 m, the maximum range at which accurate pose estimation is required, and beyond might require an additional pair of stereo cameras mounted on the end-effector or on the servicer spacecraft; their field of view, stereo baseline and settings would have to be optimised for the operational range. Our vision system could easily process images from such cameras. Selected components of the described vision system are being adapted for use in the Orbital Express Space Demonstration [13]. New research and development focuses on integration of the long range subsystem, integration of the system with robotic and spacecraft controllers, and testing in different operational scenarios.

7 REFERENCES

1. M.A. Abidi and T. Chandra, "Pose estimation for camera calibration and landmark tracking", ICRA, 1990.
2. P. Besl and N. McKay, "A Method for Registration of 3-D Shapes", IEEE PAMI, vol. 14, no. 2, pp. 239-256, 1992.
3. D. DeMenthon and L.S. Davis, "Model-Based Object Pose in 25 Lines of Code", ECCV, 1992.
4. M.A. Fischler and R.C. Bolles, "Random sample consensus: a paradigm for model fitting with application to image analysis and automated cartography", Commun. Assoc. Comp. Mach., 24:381-395, 1981.
5. S. Hollander, "Autonomous Space Robotics: Enabling Technologies for Advanced Space Platforms", Space 2000 Conference, AIAA.
6. Y. Hung, P.S. Yeh and D. Harwood, "Passive ranging to known planar point sets", ICRA, 1985.
7. P. Jasiobedzki and C. Anders, "Computer Vision for Space Applications: Applications, Role and Performance", Space Technology.
8. P. Jasiobedzki, M. Greenspan and G. Roth, "Pose Determination and Tracking for Autonomous Satellite Capture", i-SAIRAS.
9. G. Denis, J.-A. Beraldin, F. Blais, M. Rioux and L. Cournoyer, "A Three-Dimensional Tracking and Imaging Laser Scanner for Space Operations", SPIE Vol. 3707.
10. A. Ledebuhr, J. Kordas, L. Ng, M. Jones et al., "Autonomous, Agile, Micro-Satellites and Supporting Technologies for Use in LEO Missions", 12th AIAA/USU Conf. on Small Satellites.
11. M. Liu and P. Jasiobedzki, "Behaviour based visual servo controller for satellite capture", ASTRA.
12. M. Oda, K. Kine and F. Yamagata, "ETS-VII - a Rendezvous Docking and Space Robot Technology Experiment Satellite", 46th Int. Astronautical Congress, Norway, Oct 1995.
13. Orbital Express.
14. G. Roth and A. Whitehead, "Using projective vision to find camera positions in an image sequence", Vision Interface 2000, Montreal, Canada, May 2000.
15. P. Rousseeuw and A. Leroy, "Robust Regression and Outlier Detection", John Wiley and Sons, New York.
16. Spartan.
17. M. Greenspan and P. Jasiobedzki, "Pose Determination of a Free-Flying Satellite", MTOR02: Motion Tracking and Object Recognition, Las Vegas, USA, June 24-27, 2002.


More information

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H.

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H. Nonrigid Surface Modelling and Fast Recovery Zhu Jianke Supervisor: Prof. Michael R. Lyu Committee: Prof. Leo J. Jia and Prof. K. H. Wong Department of Computer Science and Engineering May 11, 2007 1 2

More information

Space Robotics. Lecture #23 November 15, 2016 Robotic systems Docking and berthing interfaces Attachment mechanisms MARYLAND U N I V E R S I T Y O F

Space Robotics. Lecture #23 November 15, 2016 Robotic systems Docking and berthing interfaces Attachment mechanisms MARYLAND U N I V E R S I T Y O F Lecture #23 November 15, 2016 Robotic systems Docking and berthing interfaces Attachment mechanisms 1 2016 David L. Akin - All rights reserved http://spacecraft.ssl.umd.edu Shuttle Remote Manipulator System

More information

3D Time-of-Flight Image Sensor Solutions for Mobile Devices

3D Time-of-Flight Image Sensor Solutions for Mobile Devices 3D Time-of-Flight Image Sensor Solutions for Mobile Devices SEMICON Europa 2015 Imaging Conference Bernd Buxbaum 2015 pmdtechnologies gmbh c o n f i d e n t i a l Content Introduction Motivation for 3D

More information

SIMULTANEOUS REGISTRATION OF MULTIPLE VIEWS OF A 3D OBJECT Helmut Pottmann a, Stefan Leopoldseder a, Michael Hofer a

SIMULTANEOUS REGISTRATION OF MULTIPLE VIEWS OF A 3D OBJECT Helmut Pottmann a, Stefan Leopoldseder a, Michael Hofer a SIMULTANEOUS REGISTRATION OF MULTIPLE VIEWS OF A 3D OBJECT Helmut Pottmann a, Stefan Leopoldseder a, Michael Hofer a a Institute of Geometry, Vienna University of Technology, Wiedner Hauptstr. 8 10, A

More information

Dynamic Time Warping for Binocular Hand Tracking and Reconstruction

Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Javier Romero, Danica Kragic Ville Kyrki Antonis Argyros CAS-CVAP-CSC Dept. of Information Technology Institute of Computer Science KTH,

More information

Camera Calibration for a Robust Omni-directional Photogrammetry System

Camera Calibration for a Robust Omni-directional Photogrammetry System Camera Calibration for a Robust Omni-directional Photogrammetry System Fuad Khan 1, Michael Chapman 2, Jonathan Li 3 1 Immersive Media Corporation Calgary, Alberta, Canada 2 Ryerson University Toronto,

More information

Marcel Worring Intelligent Sensory Information Systems

Marcel Worring Intelligent Sensory Information Systems Marcel Worring worring@science.uva.nl Intelligent Sensory Information Systems University of Amsterdam Information and Communication Technology archives of documentaries, film, or training material, video

More information

MULTI-MODAL MAPPING. Robotics Day, 31 Mar Frank Mascarich, Shehryar Khattak, Tung Dang

MULTI-MODAL MAPPING. Robotics Day, 31 Mar Frank Mascarich, Shehryar Khattak, Tung Dang MULTI-MODAL MAPPING Robotics Day, 31 Mar 2017 Frank Mascarich, Shehryar Khattak, Tung Dang Application-Specific Sensors Cameras TOF Cameras PERCEPTION LiDAR IMU Localization Mapping Autonomy Robotic Perception

More information

Efficient SLAM Scheme Based ICP Matching Algorithm Using Image and Laser Scan Information

Efficient SLAM Scheme Based ICP Matching Algorithm Using Image and Laser Scan Information Proceedings of the World Congress on Electrical Engineering and Computer Systems and Science (EECSS 2015) Barcelona, Spain July 13-14, 2015 Paper No. 335 Efficient SLAM Scheme Based ICP Matching Algorithm

More information

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 253

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 253 Index 3D reconstruction, 123 5+1-point algorithm, 274 5-point algorithm, 260 7-point algorithm, 255 8-point algorithm, 253 affine point, 43 affine transformation, 55 affine transformation group, 55 affine

More information

IGTF 2016 Fort Worth, TX, April 11-15, 2016 Submission 149

IGTF 2016 Fort Worth, TX, April 11-15, 2016 Submission 149 IGTF 26 Fort Worth, TX, April -5, 26 2 3 4 5 6 7 8 9 2 3 4 5 6 7 8 9 2 2 Light weighted and Portable LiDAR, VLP-6 Registration Yushin Ahn (yahn@mtu.edu), Kyung In Huh (khuh@cpp.edu), Sudhagar Nagarajan

More information

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22) Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application

More information

LUMS Mine Detector Project

LUMS Mine Detector Project LUMS Mine Detector Project Using visual information to control a robot (Hutchinson et al. 1996). Vision may or may not be used in the feedback loop. Visual (image based) features such as points, lines

More information

DEVELOPMENT OF A ROBUST IMAGE MOSAICKING METHOD FOR SMALL UNMANNED AERIAL VEHICLE

DEVELOPMENT OF A ROBUST IMAGE MOSAICKING METHOD FOR SMALL UNMANNED AERIAL VEHICLE DEVELOPMENT OF A ROBUST IMAGE MOSAICKING METHOD FOR SMALL UNMANNED AERIAL VEHICLE J. Kim and T. Kim* Dept. of Geoinformatic Engineering, Inha University, Incheon, Korea- jikim3124@inha.edu, tezid@inha.ac.kr

More information

Augmenting Reality, Naturally:

Augmenting Reality, Naturally: Augmenting Reality, Naturally: Scene Modelling, Recognition and Tracking with Invariant Image Features by Iryna Gordon in collaboration with David G. Lowe Laboratory for Computational Intelligence Department

More information

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images 1 Introduction - Steve Chuang and Eric Shan - Determining object orientation in images is a well-established topic

More information

An Interactive Technique for Robot Control by Using Image Processing Method

An Interactive Technique for Robot Control by Using Image Processing Method An Interactive Technique for Robot Control by Using Image Processing Method Mr. Raskar D. S 1., Prof. Mrs. Belagali P. P 2 1, E&TC Dept. Dr. JJMCOE., Jaysingpur. Maharashtra., India. 2 Associate Prof.

More information

3D Object Representations. COS 526, Fall 2016 Princeton University

3D Object Representations. COS 526, Fall 2016 Princeton University 3D Object Representations COS 526, Fall 2016 Princeton University 3D Object Representations How do we... Represent 3D objects in a computer? Acquire computer representations of 3D objects? Manipulate computer

More information

Efficient View-Dependent Sampling of Visual Hulls

Efficient View-Dependent Sampling of Visual Hulls Efficient View-Dependent Sampling of Visual Hulls Wojciech Matusik Chris Buehler Leonard McMillan Computer Graphics Group MIT Laboratory for Computer Science Cambridge, MA 02141 Abstract In this paper

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric

More information

Visual Odometry for Non-Overlapping Views Using Second-Order Cone Programming

Visual Odometry for Non-Overlapping Views Using Second-Order Cone Programming Visual Odometry for Non-Overlapping Views Using Second-Order Cone Programming Jae-Hak Kim 1, Richard Hartley 1, Jan-Michael Frahm 2 and Marc Pollefeys 2 1 Research School of Information Sciences and Engineering

More information

Fast Natural Feature Tracking for Mobile Augmented Reality Applications

Fast Natural Feature Tracking for Mobile Augmented Reality Applications Fast Natural Feature Tracking for Mobile Augmented Reality Applications Jong-Seung Park 1, Byeong-Jo Bae 2, and Ramesh Jain 3 1 Dept. of Computer Science & Eng., University of Incheon, Korea 2 Hyundai

More information

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN

Transactions on Information and Communications Technologies vol 16, 1996 WIT Press,   ISSN ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information

More information

Augmented Reality, Advanced SLAM, Applications

Augmented Reality, Advanced SLAM, Applications Augmented Reality, Advanced SLAM, Applications Prof. Didier Stricker & Dr. Alain Pagani alain.pagani@dfki.de Lecture 3D Computer Vision AR, SLAM, Applications 1 Introduction Previous lectures: Basics (camera,

More information

On-line and Off-line 3D Reconstruction for Crisis Management Applications

On-line and Off-line 3D Reconstruction for Crisis Management Applications On-line and Off-line 3D Reconstruction for Crisis Management Applications Geert De Cubber Royal Military Academy, Department of Mechanical Engineering (MSTA) Av. de la Renaissance 30, 1000 Brussels geert.de.cubber@rma.ac.be

More information