A 3D Representation of Obstacles in the Robot's Reachable Area Considering Occlusions

Angelika Fetzner, Christian Frese, Christian Frey
Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, Germany

Abstract

For human-robot interaction, a 3D representation of the robot's close environment based on depth measurements is proposed. First, the current close range is defined by computing the reachable space of the robot. Then obstacles and occluded space within the close range are represented in an octree structure, considering obstacle occlusions as well as robot occlusions. Based on the obstacle octree, the minimum distance between the robot and the surrounding obstacles is determined. Experimental results with depth sensor data from multiple on-board sensors of a mobile manipulator are shown.

1 Introduction

Current robot manufacturing applications ensure safety by strictly separating humans from moving robots, which is achieved by fences or other physical barriers. If a human nevertheless enters the robot workspace, the robot is immediately stopped. These safety installations prohibit shared workspaces of humans and robots as well as physical human-robot interaction. To overcome these issues, new monitoring methods for an industrial human-robot interaction scenario are developed in the SAPHARI project. This scenario compiles the most important components that typical applications share with regard to sensing and perception and is described in the following: A mobile manipulator, consisting of an omni-directional mobile platform and a 7 DoF manipulator, performs an assembly task in which it picks up several parts from one workbench and moves to a second workbench. On its way and at the second workbench the parts are assembled. Finally, the robot drives to a human coworker and hands him the assembled part. During the task execution an uninvolved human worker crosses the robot workcell. Based on workspace monitoring, dangerous situations have to be detected and undesired collisions between the robot and its environment have to be avoided.

To ensure safety in human-robot interaction, a 3D representation of the robot's close surroundings is a fundamental prerequisite. Based on the 3D environment representation, the minimum distance between the robot and obstacles can be computed, and collision detection and avoidance strategies can be performed. The 3D representation therefore has to meet the following requirements: It has to include static as well as dynamic obstacles. Furthermore, space occluded by the robot or by other objects has to be considered. To reduce the occluded and unobserved space, information from multiple heterogeneous sensors has to be fused. For the scenario considered here, the methods have to be applicable to mobile manipulators and depth sensors mounted on the robot. For safety reasons, the monitoring of the robot environment has to meet real-time requirements in order to enable quick reactions to sudden dangers. At the same time, 3D monitoring usually leads to high computational costs. Therefore, the 3D monitoring is restricted here to a close spatial range and short time horizons to reduce the computational costs. In the following, this space is referred to as close range. In regions at larger distances to the robot (far range), less expensive 2D or 2.5D algorithms for workspace monitoring are applied [1, 2].
The paper is organized as follows: After a short overview of related work in Section 2, the problem of selecting and placing suitable sensors to monitor the close robot environment in the presented scenario is discussed in Section 3. In Section 4, the 3D close range monitoring concept is explained, which consists of the computation of the robot's close range as the reachable area, the 3D representation of obstacles within the close range, and the minimum distance computation. Finally, experimental results are shown in Section 5.

2 Related Work

For obstacle detection, different sensors have been recommended, e.g., color cameras [3] and depth sensors [4, 5, 6]. Their optimal placement has been studied as a 2D coverage problem [7] and as a 3D coverage problem [8]. According to [9], good coverage of a robot workcell is achieved if the sensors are placed high in the corners of the workcell. In [10], an algorithm for the optimal placement of presence and depth sensors for a stationary robot arm is proposed that considers dynamic obstacles. These approaches are usually suited to the placement of stationary sensors. As the sensor placement for the mobile manipulator in the scenario described above is not restricted to sensors installed in the workspace, the problem of selecting and placing suitable sensors is discussed here with regard to mobile manipulators.

For monitoring obstacles in the robot workspace and detecting collisions with these obstacles, different approaches exist. Several proposed methods are based on color camera systems, e.g., [3, 11]. As color cameras do not directly provide depth information, these approaches usually require background subtraction as in [3], or detect only dedicated objects, e.g., humans in [11]. These drawbacks can be overcome by using depth sensors. Based on a single depth camera, a depth space approach to compute the minimum distance between the robot and obstacles considering occlusions is proposed in [12]. As this method works directly in the sensor's depth space, it is very fast, but the fusion of information from multiple sensors is not possible. The minimum distance to the robot based on two depth cameras is computed in [6]. To achieve real-time performance, the background is removed from the retrieved depth image using a reference background image. The remaining depth points are converted into a point cloud in the robot base frame. Point clouds from different sensors are merged and the minimum distance is computed without considering occlusions. In [9], the robot surroundings within the robot workcell are represented by a volumetric evidence grid that is computed by fusing multiple 3D imaging sensors. The occupied space is classified into background, robot, and humans, assuming that the background is known a priori and that all foreground objects are humans. Based on convex hulls, the minimum distance between the robot and obstacles is determined in [4]. Occlusions are considered within the surveilled space, which is specified as the intersection of the visible volumes of all available sensors. In [13], representing the robot environment by octrees is proposed. The approach distinguishes between free, occupied, and unknown space, but with the objective of mapping large environments.

Our approach for a 3D obstacle representation is founded on octrees as well, but obstacle occlusions and robot occlusions are explicitly considered, similar to [4]. With regard to a fast robot-obstacle distance computation, the considered area is restricted to the relevant close range of the robot. Other methods like the background subtraction mentioned in [6] cannot easily be applied to reduce the computational costs in the considered scenario, as it is more difficult to gain knowledge about the background in the case of non-stationary sensors on a mobile platform.

3 Sensor Setup

For acquiring a 3D obstacle representation in human-robot interaction, on-board and off-board sensors are conceivable which enable the robot to perceive its environment. First, different sensor categories are considered for the environment perception. Then the sensors are simulated in a generalized way in order to find an appropriate sensor selection and placement.

3.1 Categorization of Safety-Relevant Sensors

In the following, relevant sensors are categorized according to the measured information. Tactile sensors measure a direct physical contact and are therefore suitable for monitoring the proximate robot surroundings. Presence sensors indicate whether there is an object within a specified area. The object presence can be detected either through physical contact (e.g., bumpers) or without physical contact (e.g., light arrays). Depth sensors provide distance information.
Examples of depth sensors are 2D/3D laser scanners, time-of-flight cameras, triangulation-based sensors, and stereo camera systems. Intensity cameras provide color or gray-scale intensity information in the visible as well as in the invisible frequency domain, e.g., in the infrared spectrum for detecting humans.

3.2 Simulation of Sensor Characteristics

In order to analyze the capabilities of the different sensor types, a 3D simulation of the scenario described in Section 1 is set up. As simulation tool, the 3D rigid body simulator Gazebo [14] is used in conjunction with the Robot Operating System (ROS) [15]. Gazebo is capable of simulating static and dynamic objects (e.g., robots, environment) and sensors. With the previously defined sensor categories, sensors can be simulated independently of the real sensor; only the sensor category and the sensor properties such as the field of view, range, and resolution have to be specified. Using this approach, sensor data can be generated and the monitored sections of the robot environment can be visualized without a real hardware setup of the sensor system. Sets of different sensors can be evaluated to find occlusions during a test run of the scenario.
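To make the generalized sensor description concrete, the following sketch shows one way such a category-plus-properties parametrization could look. It is a minimal illustration in Python; the class name, field names, and numeric values are illustrative assumptions and are not taken from the paper or from the Gazebo/ROS interfaces.

```python
from dataclasses import dataclass
from enum import Enum

class SensorCategory(Enum):
    TACTILE = "tactile"
    PRESENCE = "presence"
    DEPTH = "depth"
    INTENSITY = "intensity"

@dataclass
class SimulatedSensor:
    """Generic, device-independent sensor description used to drive a simulation."""
    category: SensorCategory
    horizontal_fov_deg: float   # horizontal opening angle
    vertical_fov_deg: float     # vertical opening angle (0 for a single-plane scanner)
    min_range_m: float          # closest measurable distance
    max_range_m: float          # farthest measurable distance
    horizontal_rays: int        # angular resolution: rays per scan line
    vertical_rays: int          # number of scan lines

# Example instances; the opening angles and ray counts are illustrative values only.
depth_camera = SimulatedSensor(SensorCategory.DEPTH, 60.0, 45.0, 0.6, 7.0, 640, 480)
laser_scanner = SimulatedSensor(SensorCategory.DEPTH, 270.0, 0.0, 0.1, 20.0, 1080, 1)
```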

3.3 Selection of Safety-Relevant Sensors for Human-Robot Interaction

For monitoring the robot environment with regard to collision avoidance, the application of depth sensors is most suitable, as the measured information directly delivers 3D point clouds of the environment. Intensity cameras and presence sensors, in contrast, provide only information projected to the image plane. In principle, the depth sensors can be mounted on the robot or can be placed in the workcell. Selecting and placing suitable sensors is a non-trivial task, as the number of sensors that can be deployed is limited by purchase costs, energy consumption (especially on mobile robots), and the required computing capacity. If the sensors are mounted in the workcell, the main working area can be covered by several sensors with different points of view to reduce the risk of occlusions. This approach is suitable for stationary robot arms and for mobile robots in small, limited workcells, whereas fully covering large workspaces of mobile robots requires a large number of sensors or sensors with long ranges and large fields of view. Furthermore, this approach places high demands on the robot localization if the robot has to be removed from the sensor data as described in Section 4.2.3. If, on the other hand, the sensors are placed on the robot, only a smaller region around the robot has to be covered by the sensors and the sensor placement is independent of the workcell size. Therefore, in the chosen sensor setup the sensors are mounted on the mobile platform.

3.4 Experimental Sensor Setup

The mobile manipulator, an omni-directional platform with a 7 DoF robot arm, is equipped with two 2D laser scanners and two 3D depth cameras (Kinect sensors) as shown in Figure 1. Based on the sensor simulation, the monitored area of each sensor is visualized in Figure 2. The two 2D laser scanners, each with a 270° horizontal field of view (blue), are mounted at two edges of the mobile platform and monitor the space around the platform in a plane near the ground floor. The two Kinect sensors (yellow and green) monitor the surroundings of the manipulator. Each Kinect sensor provides a triangulation-based depth camera with a usable measurement range from 0.6 m to approximately 7 m. They are placed in order to achieve a large sensing volume and small occlusions.

Figure 1: Sensor setup.

Figure 2: Sensor simulation with visualization of measurement ranges.

4 3D Close Range Monitoring

Due to limited computational resources, the detailed 3D representation of the robot environment is restricted to the close range of the robot. To specify the close range, the space that is reachable by the robot within a certain time horizon is computed. Within the close range, the space occupied or occluded by obstacles is represented in an octree. Based on this octree, the minimum distance between the robot and the surrounding objects is determined.

4.1 Reachable Area

The robot's close range is defined here as the space that can be reached by at least one part of the robot within a certain time horizon, based on the current robot state and the maximum velocities. To represent the close range, a reachability grid is computed. In [16], a method is proposed to compute the reachability grid of a manipulator. Each manipulator link is represented by a point cloud. Beginning with the end effector link, the link point cloud is rotated around the corresponding joint through the maximum possible range of motion specified by the time horizon and the joint constraints. For each rotated point, the minimum time to reach this point is calculated. The resulting point cloud is reduced by collapsing it into a voxel grid. Then, the reduced point cloud is attached to the next link point cloud and the former steps are repeated. Due to the voxelization step at each joint, the method achieves an acceptable computation time even for high-DoF manipulators.
To apply this approach to a mobile manipulator with an omni-directional platform, it is extended by translational motions analogous to the described rotational motion. The omni-directional platform is therefore modeled as one link with a rotational joint and two prismatic joints for the translational movements. The considered motion constraints are the joint limits and the maximum joint and platform velocities. The robot's close range specified by the reachability grid is time-variant and is computed on-line. The close range of the LWR manipulator and the close range of the mobile platform including the arm, based on the current robot state, are illustrated in Figure 3 and Figure 4, respectively. The minimum time to reach a cell is encoded by color. The time horizon is set to 0.5 s, and the maximum joint velocities correspond to the manufacturer's data.

Figure 3: Visualization of the reachable area for the manipulator only, considering a time horizon of 0.5 s.

Figure 4: Visualization of the reachable area for the platform with manipulator, considering a time horizon of 0.5 s.
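The per-joint sweep-and-voxelize procedure can be sketched as follows. This is a simplified illustration under several assumptions: the links are given as point clouds in a common frame together with joint descriptions (origin, axis, type, maximum velocity), joint limits are omitted for brevity, and the function and field names are not taken from the cited implementation [16].

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues' formula for a rotation about a unit axis."""
    axis = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def voxelize(points, times, res):
    """Collapse a point cloud into a voxel grid, keeping the minimum reach time per voxel."""
    grid = {}
    for key, t in zip(map(tuple, np.floor(points / res).astype(int)), times):
        grid[key] = min(grid.get(key, np.inf), t)
    centers = (np.array(list(grid.keys()), dtype=float) + 0.5) * res
    return centers, np.array(list(grid.values()))

def reachability_grid(links, horizon, res, n_samples=9):
    """links: ordered from end effector to base; each dict holds the link point cloud,
    the joint origin, axis, type ('revolute' or 'prismatic'), and maximum velocity."""
    pts, times = np.empty((0, 3)), np.empty(0)
    for link in links:
        # attach the already voxelized distal cloud to the current link's point cloud
        pts = np.vstack([pts, link["points"]])
        times = np.concatenate([times, np.zeros(len(link["points"]))])
        limit = link["max_vel"] * horizon          # largest motion possible within the horizon
        swept_p, swept_t = [pts], [times]
        for q in np.linspace(-limit, limit, n_samples):
            if q == 0.0:
                continue                           # the unmoved pose is already included
            if link["type"] == "revolute":
                R = rotation_matrix(link["axis"], q)
                moved = (pts - np.asarray(link["origin"], dtype=float)) @ R.T + link["origin"]
            else:                                  # prismatic joints model the platform translation
                moved = pts + q * np.asarray(link["axis"], dtype=float)
            swept_p.append(moved)
            swept_t.append(times + abs(q) / link["max_vel"])
        # collapsing into a voxel grid keeps the computation tractable for high-DoF chains
        pts, times = voxelize(np.vstack(swept_p), np.concatenate(swept_t), res)
    return pts, times   # voxel centers reachable within the horizon and their minimum reach times
```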

The acceleration constraints are disregarded, as the maximum accelerations of the robot are comparatively high. Furthermore, the error due to the assumption of infinite acceleration only leads to an overestimate of the reachable space, which is uncritical for the close range definition. As the method for the reachability computation treats all joint motions independently, it is not able to recognize invalid robot configurations (e.g., due to self-collisions). But this, like the other simplifications, results only in an overestimate of the reachable area. The computational load can be further reduced if the resulting grid contains only the information whether a cell is reachable or not, instead of the minimum time to reach the cell.

4.2 Obstacle Octree

Based on the data of multiple depth sensors, an octree representation of obstacles in the close range of the robot is computed. For safety reasons, the representation has to include obstacle occlusions, especially since an obstacle can occur in the space between the robot and the sensor.

4.2.1 Octree Data Structure

An octree is a hierarchical data structure. The three-dimensional space is represented by a set of nodes, where each node corresponds to a voxel. Each node is recursively subdivided into eight sub-voxels (nodes), if necessary, until the minimum voxel size is reached [17, 13]. With the OctoMap library [13], an open source framework exists that provides, amongst others, an octree data structure and ray tracing methods.

4.2.2 Sensor Model

All depth sensors, e.g., laser scanners, triangulation-based sensors, or time-of-flight cameras, are modeled in a generalized way by a ray-based model. The sensor model is described by a set of rays beginning in the sensor origin. The rays are specified by the opening angles and resolution of the sensor. The sensor's field of view is defined by the opening angles and the minimum and maximum range. The sensor measures the distance between the sensor origin and the point where a ray hits the first object. In the following, these points are referred to as object points. It is assumed that the above-mentioned sensor properties (opening angles, maximum range, and resolution) are known. Furthermore, it is assumed that the transformation between the sensor origin and the robot base frame is known. In the case of a mobile platform with sensors mounted on the platform, the transformation between sensor and robot base frame is time-invariant and determined by a calibration procedure. Otherwise, if the sensors are installed in the workspace, the robot pose additionally has to be provided. For each sensor i, the set V_i(t) of all octree nodes that are located in the sensor's field of view,

    V_i(t) = { n ∈ T | n is in the field of view of sensor i },    (1)

is determined in each time step t, where T is an octree in the current robot base frame, limited to a certain volume expansion, and n is an octree node. The octree bounds are chosen such that the robot's close range is always located within these bounds. In the case of a static transform between the sensor and the robot base frame, V_i is time-independent and has to be computed only once.
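A minimal sketch of how V_i(t) from Eq. (1) could be evaluated is given below. It approximates each octree node by its center point and assumes a pinhole-like sensor frame convention (z pointing forward); the function name, the matrix convention, and this approximation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nodes_in_fov(node_centers, T_base_sensor, h_fov, v_fov, min_range, max_range):
    """Boolean mask over octree node centers (robot base frame) selecting V_i(t), Eq. (1).
    T_base_sensor: 4x4 homogeneous transform of the sensor frame expressed in the base frame."""
    R, t = T_base_sensor[:3, :3], T_base_sensor[:3, 3]
    p = (node_centers - t) @ R                  # node centers expressed in the sensor frame
    rng = np.linalg.norm(p, axis=1)
    azimuth = np.arctan2(p[:, 0], p[:, 2])      # angle about the vertical axis
    elevation = np.arctan2(p[:, 1], p[:, 2])    # angle about the horizontal axis
    return ((p[:, 2] > 0.0) &
            (rng >= min_range) & (rng <= max_range) &
            (np.abs(azimuth) <= h_fov / 2.0) &
            (np.abs(elevation) <= v_fov / 2.0))
```

For a sensor rigidly mounted on the platform, the resulting mask can be computed once and reused, mirroring the remark above on the static sensor-to-base transform.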
4.2.3 Sensor Data Processing

If the captured sensor data contains, besides obstacle points, also points on the robot, the sensor data has to be filtered to distinguish between obstacle and robot points before the captured depth sensor data can be used to create an obstacle representation. Several filters exist that remove robot points from depth sensor data based on the known robot model, e.g., the Realtime URDF Filter [18] for depth images and the Robot Self Filter from MoveIt! [19] for point clouds. The retrieved depth data may be available in different formats, e.g., as a depth image from a Kinect-like sensor, as a list of distance measurements from a laser scanner, or as a point cloud. To obtain a sensor-independent representation, a robot point cloud and an obstacle point cloud in the sensor frame are generated from the retrieved depth data. Even though the data is in the sensor frame, points that are located outside the octree's bounding box can be partly removed to reduce the computational costs. To this end, the maximum distance between the sensor origin and the bounding box edges is computed. All points with a depth measurement larger than this distance are certainly outside the bounding box. Then, the point clouds are transformed to the robot base frame and the remaining points outside the bounding box are filtered out. Based on the pre-processed obstacle point clouds, for each sensor i the set P_i(t) of all octree nodes that contain at least one obstacle point is determined. By means of ray tracing, the nodes O_i(t) that are occupied or occluded by obstacle points and the nodes R_i(t) that are occupied or occluded by robot points are computed. Then, for each sensor, the free space F_i(t) is given as the space in the field of view without the areas occupied or occluded by an obstacle or by the robot:

    F_i(t) = V_i(t) \ (O_i(t) ∪ R_i(t)).    (2)

4.2.4 Information Fusion

Based on the reachability grid for the current joint state, all nodes that are reachable within a certain time horizon are included in the set C(t). The sensor data pre-processing and the computation of occupied and occluded nodes are done independently for each sensor in parallel processes. To obtain the final obstacle octree, the information about occupied and occluded nodes is fused according to

    O(t) = C(t) ∩ ⋃_i ( P_i(t) ∪ Õ_i(t) )    (3)

with

    Õ_i(t) = O_i(t) \ ⋃_{j≠i} F_j(t).    (4)

In the final octree, all nodes are marked as occupied that contain a directly measured reachable obstacle point, as well as all reachable nodes that are in the occluded space of one sensor and are not detected as free by another sensor. This means the monitored space is the union of all sensor fields of view within the reachable area. Unsupervised space is not considered as obstacle, as it is assumed that the sensor setup provides sufficient information to avoid dangers. This assumption is necessary, as sensor configurations used for obstacle detection often do not allow monitoring all space around the robot, especially for mobile platforms. For example, with the sensor setup shown in Section 3.4, the space at the sides of the mobile platform is only surveyed by the 2D laser scanners in one plane. But the remaining risk due to this sensor setup is assessed as acceptable, since if a human is standing next to the platform, at least a part of the human is detected by the laser scanners. Similarly, space that is only occluded by the robot but not occluded by an obstacle is not considered as obstacle either.

The proposed principle is illustrated in Figure 5 on the basis of two depth sensors. The scene contains the known robot (blue object) and an obstacle (red object). The two upper drafts show the nodes in the sensor's field of view (green), the nodes occupied or occluded by an obstacle (red), and the nodes occupied or occluded by the robot (blue) for the two depth sensors. The lower image shows the fusion result, in which the red cells correspond to the occupied nodes in the final octree. By fusing multiple sensors, the monitored space is increased and occlusions of the robot as well as of obstacles are reduced.

Figure 5: Illustration of the obstacle octree generation principle with two sensors (first example).

The second example in Figure 6 illustrates an octree with regard to regions occluded by an obstacle as well as by the robot. It points out the necessity of being aware of robot occlusions instead of treating the robot and its occlusions as free space. As a conservative assumption of the obstacle dimensions is required for safety reasons, regions occluded by robot and obstacle that are not detected as free by another sensor are handled as obstacle.

Figure 6: Illustration of the obstacle octree generation principle with two sensors (second example).
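The set operations of Eqs. (2)-(4) can be written down compactly when the per-sensor results are available as sets of octree node keys. The following sketch assumes exactly that representation; computing V_i, P_i, O_i, and R_i (e.g., by ray tracing in the octree) is taken as given, and the function and variable names are illustrative rather than the paper's implementation.

```python
def fuse_obstacle_nodes(C, V, P, O, R):
    """Fuse per-sensor information into the final set of occupied nodes O(t).
    C: node keys reachable within the time horizon (set)
    V, P, O, R: lists with one set per sensor (field of view, measured obstacle nodes,
    obstacle-occupied/occluded nodes, robot-occupied/occluded nodes)."""
    n = len(V)
    # Eq. (2): free space of sensor i = field of view minus obstacle- and robot-affected nodes
    F = [V[i] - (O[i] | R[i]) for i in range(n)]
    occupied = set()
    for i in range(n):
        # Eq. (4): nodes occluded/occupied for sensor i that no other sensor sees as free
        free_elsewhere = set().union(*(F[j] for j in range(n) if j != i))
        O_tilde_i = O[i] - free_elsewhere
        # inner term of Eq. (3): directly measured obstacle nodes plus remaining occluded nodes
        occupied |= P[i] | O_tilde_i
    # Eq. (3): restrict the union over all sensors to the reachable nodes C(t)
    return occupied & C
```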

4.3 Distance Computation

Based on the resulting obstacle octree, which contains all nodes occupied or occluded by obstacles within the reachable area, the minimum distance between sample points on the robot and the centers of the obstacle nodes is computed. The computed distance between the centers of the obstacle nodes and a sample point usually differs from the real distance between the obstacle and the sample point. Assuming that a node is either fully occupied by an obstacle or that only a corner of the node is occupied, the distance computation relative to the node's center leads to a maximum error of

    e_max = ± (√3 / 2) · r,    (5)

where r denotes the octree resolution. This maximum error has to be considered by the collision avoidance strategies that use the computed distance. Due to occlusions, the computed distance can also be significantly smaller than the real distance to an obstacle. This property, however, is desired with regard to safety.
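As a sketch of this step, the minimum distance and the bound of Eq. (5) could be computed as follows. The brute-force pairwise evaluation and the names are illustrative assumptions (a spatial index such as a k-d tree would be the natural optimization), not the paper's implementation.

```python
import numpy as np

def min_robot_obstacle_distance(robot_sample_points, obstacle_node_centers, resolution):
    """Minimum distance between robot sample points and occupied node centers,
    together with the worst-case discretization error e_max of Eq. (5)."""
    # brute-force pairwise distances of shape (num_samples, num_nodes)
    d = np.linalg.norm(robot_sample_points[:, None, :] - obstacle_node_centers[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    e_max = np.sqrt(3.0) / 2.0 * resolution   # center-to-corner half-diagonal of a node
    return d[i, j], (i, j), e_max
```

A collision avoidance strategy consuming this value could, for example, subtract e_max from the computed distance to remain on the conservative side.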
5 Experimental Results

The proposed method has been applied to sensor data acquired with the setup described in Section 3.4. The two laser scanners, which observe the area on a horizontal plane around the platform near the ground floor, are updated at 12.5 Hz. The two Kinect sensors, which monitor the surroundings of the robot arm, are operated with an update rate of 15 Hz. Their depth images are filtered by the Realtime URDF Filter [18] to distinguish between the robot and other objects. This filter renders a virtual depth image based on the known robot model and the current robot state. Comparing the real depth image retrieved from the sensor with the virtual depth image allows deciding whether the pixels represent a part of the robot or of another object.

In Figure 7, snapshots of an experimental run are shown. During the experiment, a human was walking to the left of the robot and reaching out for the robot. Behind the robot there were several static obstacles. The snapshots show the current robot state and the obstacle octree with a resolution of 10 cm, whereby for better understanding the occupied nodes are visualized in red and the occluded nodes transparent. The computed minimum distance between the obstacle octree and the robot is marked by a yellow line.

Figure 7: Snapshot sequence of experimental results. Nodes of the obstacle octree containing detected obstacle points are visualized in red, occluded nodes transparent. The minimum distance between the robot and the obstacle octree is shown as a yellow line.

The computation time for the octree generation depends on the number of object points in the robot's close range and the size of the occluded space. In the executed experiments on a 2.8 GHz Intel Core i7 quad-core processor, the computation times averaged 10 ms for the reachability grid, 40 ms for the URDF filter, 80 ms to compute the occupied and occluded obstacle nodes of a Kinect sensor, 8 ms to compute the nodes occluded by the robot in a Kinect's field of view, 1 ms to compute the occupied and occluded obstacle nodes of a laser scanner, 2 ms to fuse the data into the resulting obstacle octree, and 13 ms to compute the minimum distance to the robot regarding 462 sample points on the robot.

6 Conclusions

In this paper, the robot's close range is defined as the space that is reachable by at least one robot link within a certain time horizon, considering the current robot state and the maximum velocities. Within the close range, obstacles are represented in an obstacle octree that considers both obstacle and robot occlusions. The octree is computed by fusing information from multiple heterogeneous depth sensors, whereby the monitored space is enlarged and occlusions are reduced. The minimum distance between the obstacles represented by the obstacle octree and the robot is computed in order to reduce the robot velocity or to apply other collision avoidance strategies. The method is applicable to all kinds of depth sensors that deliver a dense measurement. Future work will be conducted on using the obstacle representation for collision avoidance during human-robot interaction in an industrial use case investigated in the SAPHARI project.

Acknowledgement

This work has been funded by the European Commission's 7th Framework Programme as part of the project SAPHARI under grant agreement ICT.

References

[1] C. Frese, A. Fetzner and C. Frey: Multi-Sensor Obstacle Tracking for Safe Human-Robot Interaction, Joint 45th Int. Symposium on Robotics and 8th German Conf. on Robotics (ISR/ROBOTIK).
[2] A. Fetzner, C. Frese and C. Frey: Obstacle Detection and Tracking for Safe Human-Robot Interaction Based on Multi-Sensory Point Clouds, 6th Int. Workshop on Human-Friendly Robotics (HFR), Rome.
[3] D. Henrich and T. Gecks: Multi-camera collision detection between known and unknown objects, Second ACM/IEEE Int. Conf. on Distributed Smart Cameras (ICDSC).
[4] M. Fischer and D. Henrich: 3D Collision Detection for Industrial Robots and Unknown Obstacles Using Multiple Depth Images, in T. Kröger and F. Wahl (eds.), Advances in Robotics Research - Theory, Implementation, Application, Springer.
[5] C. Walter, C. Vogel and N. Elkmann: A Stationary Sensor System to Support Manipulators for Safe Human-Robot Interaction, Joint 41st Int. Symposium on Robotics and 6th German Conf. on Robotics (ISR/ROBOTIK), pp. 1-6.
[6] Lihui Wang: Collaborations towards adaptive manufacturing, IEEE 16th Int. Conf. on Computer Supported Cooperative Work in Design (CSCWD), pp. 14-21.
[7] J.-J. Gonzalez-Barbosa, T. Garcia-Ramirez, J. Salas, J.-B. Hurtado-Ramos and J.-d.-J. Rico-Jimenez: Optimal camera placement for total coverage, IEEE Int. Conf. on Robotics and Automation (ICRA).
[8] E. Becker, G. Guerra-Filho and F. Makedon: Automatic sensor placement in a 3D volume, PETRA '09.
[9] P. Rybski, P. Anderson-Sprecher, D. Huber, C. Niessl and R. Simmons: Sensor fusion for human safety in industrial workcells, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS).
[10] F. Flacco and A. De Luca: Multiple depth/presence sensors: Integration and optimal placement for human/robot coexistence, IEEE Int. Conf. on Robotics and Automation (ICRA).
[11] J.T.C. Tan, Feng Duan, Ye Zhang, R. Kato and T. Arai: Safety design and development of human-robot collaboration in cellular manufacturing, IEEE Int. Conf. on Automation Science and Engineering (CASE).
[12] F. Flacco, T. Kroger, A. De Luca and O. Khatib: A depth space approach to human-robot collision avoidance, IEEE Int. Conf. on Robotics and Automation (ICRA).
[13] A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss and W. Burgard: OctoMap: An Efficient Probabilistic 3D Mapping Framework Based on Octrees, Autonomous Robots.
[14] Gazebo.
[15] Robot Operating System (ROS).
[16] P. Anderson-Sprecher and R. Simmons: Voxel-based motion bounding and workspace estimation for robotic manipulators, IEEE Int. Conf. on Robotics and Automation (ICRA).
[17] C. L. Jackins and S. L. Tanimoto: Oct-trees and their use in representing three-dimensional objects, Computer Graphics and Image Processing.
[18] N. Blodow: Realtime URDF Filter.
[19] I. A. Sucan and S. Chitta: MoveIt!.


Optimally Placing Multiple Kinect Sensors for Workplace Monitoring Nima Rafibakhsht Industrial Engineering Southern Illinois University Edwardsville, IL 62026 nima1367@gmail.com H. Felix Lee Industrial Engineering Southern Illinois University Edwardsville, IL 62026 hflee@siue.edu

More information

Flexible Visual Inspection. IAS-13 Industrial Forum Horizon 2020 Dr. Eng. Stefano Tonello - CEO

Flexible Visual Inspection. IAS-13 Industrial Forum Horizon 2020 Dr. Eng. Stefano Tonello - CEO Flexible Visual Inspection IAS-13 Industrial Forum Horizon 2020 Dr. Eng. Stefano Tonello - CEO IT+Robotics Spin-off of University of Padua founded in 2005 Strong relationship with IAS-LAB (Intelligent

More information

Final Project Report: Mobile Pick and Place

Final Project Report: Mobile Pick and Place Final Project Report: Mobile Pick and Place Xiaoyang Liu (xiaoyan1) Juncheng Zhang (junchen1) Karthik Ramachandran (kramacha) Sumit Saxena (sumits1) Yihao Qian (yihaoq) Adviser: Dr Matthew Travers Carnegie

More information

Dynamic update of a virtual cell for programming and safe monitoring of an industrial robot

Dynamic update of a virtual cell for programming and safe monitoring of an industrial robot Dynamic update of a virtual cell for programming and safe monitoring of an industrial robot A. Ferraro M. Indri I. Lazzero Dipartimento di Automatica e Informatica, Politecnico di Torino, Corso Duca degli

More information

Hand-Eye Calibration from Image Derivatives

Hand-Eye Calibration from Image Derivatives Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed

More information

Multi-View Stereo for Static and Dynamic Scenes

Multi-View Stereo for Static and Dynamic Scenes Multi-View Stereo for Static and Dynamic Scenes Wolfgang Burgard Jan 6, 2010 Main references Yasutaka Furukawa and Jean Ponce, Accurate, Dense and Robust Multi-View Stereopsis, 2007 C.L. Zitnick, S.B.

More information

Using Layered Color Precision for a Self-Calibrating Vision System

Using Layered Color Precision for a Self-Calibrating Vision System ROBOCUP2004 SYMPOSIUM, Instituto Superior Técnico, Lisboa, Portugal, July 4-5, 2004. Using Layered Color Precision for a Self-Calibrating Vision System Matthias Jüngel Institut für Informatik, LFG Künstliche

More information

Occlusion Detection of Real Objects using Contour Based Stereo Matching

Occlusion Detection of Real Objects using Contour Based Stereo Matching Occlusion Detection of Real Objects using Contour Based Stereo Matching Kenichi Hayashi, Hirokazu Kato, Shogo Nishida Graduate School of Engineering Science, Osaka University,1-3 Machikaneyama-cho, Toyonaka,

More information

Solid Modeling. Thomas Funkhouser Princeton University C0S 426, Fall Represent solid interiors of objects

Solid Modeling. Thomas Funkhouser Princeton University C0S 426, Fall Represent solid interiors of objects Solid Modeling Thomas Funkhouser Princeton University C0S 426, Fall 2000 Solid Modeling Represent solid interiors of objects Surface may not be described explicitly Visible Human (National Library of Medicine)

More information

EE631 Cooperating Autonomous Mobile Robots

EE631 Cooperating Autonomous Mobile Robots EE631 Cooperating Autonomous Mobile Robots Lecture: Multi-Robot Motion Planning Prof. Yi Guo ECE Department Plan Introduction Premises and Problem Statement A Multi-Robot Motion Planning Algorithm Implementation

More information

ROBOT SENSORS. 1. Proprioceptors

ROBOT SENSORS. 1. Proprioceptors ROBOT SENSORS Since the action capability is physically interacting with the environment, two types of sensors have to be used in any robotic system: - proprioceptors for the measurement of the robot s

More information

International Journal of Advance Engineering and Research Development

International Journal of Advance Engineering and Research Development Scientific Journal of Impact Factor (SJIF): 4.14 International Journal of Advance Engineering and Research Development Volume 3, Issue 3, March -2016 e-issn (O): 2348-4470 p-issn (P): 2348-6406 Research

More information

Mobile Manipulator Design

Mobile Manipulator Design Mobile Manipulator Design December 10, 2007 Reid Simmons, Sanjiv Singh Robotics Institute Carnegie Mellon University 1. Introduction This report provides a preliminary design for two mobile manipulators

More information

Fast Local Planner for Autonomous Helicopter

Fast Local Planner for Autonomous Helicopter Fast Local Planner for Autonomous Helicopter Alexander Washburn talexan@seas.upenn.edu Faculty advisor: Maxim Likhachev April 22, 2008 Abstract: One challenge of autonomous flight is creating a system

More information

IFAS Citrus Initiative Annual Research and Extension Progress Report Mechanical Harvesting and Abscission

IFAS Citrus Initiative Annual Research and Extension Progress Report Mechanical Harvesting and Abscission IFAS Citrus Initiative Annual Research and Extension Progress Report 2006-07 Mechanical Harvesting and Abscission Investigator: Dr. Tom Burks Priority Area: Robotic Harvesting Purpose Statement: The scope

More information