SRS-EEU Multi-Role Shadow Robotic System for Independent Living Enlarged EU

Deliverable D4.5.2: Context-aware Virtual 3D Display Final Report

Contract number:
Project acronym: SRS-EEU
Project title: Multi-Role Shadow Robotic System for Independent Living Enlarged EU
Deliverable number: D4.5.2
Nature: R Report
Dissemination level: PU Public
Delivery date: (month 38)
Author(s): Michal Spanel, Zdenek Materna, Vit Stancl, Tomas Lokaj, Pavel Smrz, Pavel Zemcik, Marcus Mast
Partners contributed: BUT, HDM
Contact: spanel@fit.vutbr.cz

The SRS-EEU project was funded by the European Commission under the 7th Framework Programme (FP7), Challenge 7: Independent living, inclusion and Governance.

DOCUMENT HISTORY

Version | Author(s)                                | Date             | Changes
V1      | Michal Spanel                            | 23rd January 2013 | First draft version
V2      | Michal Spanel, Vit Stancl, Zdenek Materna | March 2013       | Document structure updated, description of new parts added
V3      | Michal Spanel                            | 30th March 2013  | Second draft version
V4      | Michal Spanel, Marcus Mast               | 7th June 2013    | Summary of UI_PRO user test results added

EXECUTIVE SUMMARY

SRS (Multi-Role Shadow Robotic System for Independent Living) focuses on the development and prototyping of remotely-controlled, semi-autonomous robotic solutions in domestic environments to support elderly people. SRS solutions are designed to enable a robot to act as a shadow of its controller. For example, elderly parents can have a robot as a shadow of their children or carers. In this case, adult children or carers can help them remotely and physically with daily living tasks as if the children or carers were resident in the house. Remote presence via robotics is the key to achieving the targeted SRS goal.

The SRS-EEU extension of the SRS project focuses on multi-modal HRI support aimed at usability, safety and situation awareness for remote users. Task T4.7 (WP4) aims at the development and integration of a novel Context Aware Virtual 3D Display that is able to cope with the project's visualisation needs, resulting from the existing requirement specifications, over thin and possibly unreliable connections. The work has produced a Virtual 3D Display on the client side based on predefined 3D maps. The virtual map is updated online by combining real-time 3D robot perception, real-time 2D video processing, and remote operator perception. Deliverable D4.5.2 (M39) comprises a full report on the specification and performance of the developed software components.

TABLE OF CONTENTS

1 Introduction
2 Virtual 3D Display
  2.1 GUI Primitives for HRI
  2.2 Stereoscopic Visualization
  2.3 Assisted Arm Navigation
  2.4 Assisted Grasping
  2.5 Space Navigator
  2.6 Assisted Detection
3 Dynamic Environment Model and 3D Mapping
  3.1 Related Work
  3.2 Environment Model
  3.3 Discussion
4 Virtual 3D Display in RViz
  4.1 RViz Plugins for Assisted Arm Manipulation and Grasping
  4.2 3D Environment Mapping Plugins
  4.3 Stereoscopic Visualization
5 UI_PRO User Tests
6 Prerequisites
7 Documentation of Packages
  7.1 GUI Primitives for HRI
  7.2 Assisted Arm Manipulation and Trajectory Planning
  7.3 Assisted Grasping
  7.4 Dynamic Environment Model
References

1 INTRODUCTION

Based on the study of the SRS project requirements and the principles of user interaction with remotely controlled robots, a detailed proposal of the technical solution has been prepared. The key components of the designed solution are:

- a dynamic environment model that fuses different sources of data (point clouds from the Kinect RGB-D sensor, results of object detection and environment perception, results of trajectory prediction, etc.) and builds a more complex model of the environment, a 3D map;
- a proposal of principles for visualizing the environment with all the data sources put together, and a proposal of new HRI (Human-Robot Interaction) patterns and corresponding graphic primitives;
- real-time 3D visualization of the environment, the so-called Virtual 3D Display, able to cope with the needs of the SRS project;
- a new assisted arm manipulation module that allows the remote operator to plan and execute arm trajectories manually; and
- a new assisted grasping module for grasping unknown objects.

The proposed 3D display is an extension to the current SRS concept that increases its usability. A user benefits from:

- an improved field of view by means of an exocentric camera and 3D visualization of the environment inside RViz;
- stereoscopic visualization of the 3D environment using NVidia technology based on shutter glasses;
- a single view of the environment fusing different kinds of data (point clouds, laser scans, camera images, object detection results, etc.);
- visualization of distance indicators, trajectories, etc. using the newly defined HRI graphic primitives;
- the ability to adjust the position of the robot and the robot arm manually using in-scene primitives;
- the ability to use the Space Navigator device (3D mouse) to move the robot or adjust the position of the robot's gripper.

In order to prototype the system, BUT develops the display as a new part of the existing ROS utility RViz. The new Virtual 3D Display will be a part of the UI_PRO interface.

2 VIRTUAL 3D DISPLAY

The general result of the development consists of:

- the concept of a simplified real-time 3D visualization of the environment;
- the dynamic environment model that fuses different sources of data (point clouds, video stream, etc.) and builds a more complex model (i.e. a 3D map) of the environment;
- principles of how to visualize the environment with all the data put together, and a proposal of basic interaction patterns for working with the display;
- advanced visualization and interaction techniques based on NVidia stereoscopic technology and the Space Navigator (i.e. 3D mouse); and
- a functional concept of the display integrated into the existing SRS interfaces.

The required features of the display can be summarized as follows:

- a nearly exocentric view plus the ability to adjust the camera,
- a single view of the 3D scene (point clouds, laser scans, etc.),
- visualization of how the robot perceives the environment: simplified avatars of detected objects and output from the human sensing module (i.e. icons, textured billboards, bounding boxes, etc.),
- a virtual 3D map (voxel-based, basic geometry, etc.) built at runtime,
- visualization of the robot itself (URDF model),
- the ability to load a predefined, pre-recorded 3D map of the environment,
- special markers illustrating real dimensions of objects and distances (e.g. a distance indicator from the gripper to the next object),
- stereoscopic visualization of the 3D environment based on NVidia 3D technology,
- the ability to use the Space Navigator as the input device instead of the common mouse,
- visualization of predicted future movement and trajectories if needed, and
- advanced interaction patterns (e.g. assigning jobs by clicking on highlighted objects, placing an object model into the scene, etc.).

In order to prototype the system and demonstrate its functionality, BUT has developed a concept of the display as a new part of the existing ROS/RViz utility, and the display will be a new part of the UI_PRO interface.

Figure 1: Basic scheme of the existing ROS/SRS packages and the newly proposed modules.

Alternative communication schemes, not based on the standard ROS communication mechanism, will be explored and tested. Where appropriate, these alternative communication schemes will be integrated into the UI_PRO interfaces to increase their robustness when running over a Wi-Fi network.

2.1 GUI PRIMITIVES FOR HRI

To visualize the environment and interact with detected objects, new GUI primitives have been defined. These primitives are based on Interactive Markers [Gos11], an existing part of the ROS system. Primitives can be created either manually or by using predefined services that insert newly created primitives into the Interactive Marker Server; a minimal sketch of this mechanism is shown below.
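As an illustration of the mechanism the primitives build on, the following minimal sketch creates one draggable primitive using the standard ROS Interactive Markers API. It is not the SRS implementation; the node, server, frame and marker names are only examples.

#!/usr/bin/env python
# Minimal Interactive Markers sketch (illustrative, not the SRS code).
import rospy
from interactive_markers.interactive_marker_server import InteractiveMarkerServer
from visualization_msgs.msg import InteractiveMarker, InteractiveMarkerControl, Marker


def feedback_cb(feedback):
    # Called whenever the operator drags the primitive in RViz.
    p = feedback.pose.position
    rospy.loginfo("%s moved to (%.2f, %.2f, %.2f)", feedback.marker_name, p.x, p.y, p.z)


if __name__ == "__main__":
    rospy.init_node("primitive_example")
    server = InteractiveMarkerServer("interaction_primitives_example")  # example server name

    im = InteractiveMarker()
    im.header.frame_id = "/map"              # assumed fixed frame
    im.name = "unknown_object_1"
    im.description = "Unknown object (example)"
    im.scale = 0.5

    # Visual representation: a semi-transparent box standing in for the object.
    box = Marker()
    box.type = Marker.CUBE
    box.scale.x = box.scale.y = box.scale.z = 0.3
    box.color.r, box.color.g, box.color.b, box.color.a = 0.8, 0.3, 0.3, 0.6

    # Let the operator drag the box in the horizontal plane.
    control = InteractiveMarkerControl()
    control.interaction_mode = InteractiveMarkerControl.MOVE_PLANE
    control.orientation.w = 0.7071
    control.orientation.y = 0.7071           # ~90 deg about Y: plane normal vertical
    control.always_visible = True
    control.markers.append(box)
    im.controls.append(control)

    server.insert(im, feedback_cb)
    server.applyChanges()
    rospy.spin()

The actual srs_interaction_primitives server follows the same pattern but wraps it behind the predefined services documented in Section 7.1.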

Figure 2: Virtual 3D Display based on the developed HRI primitives (left); and a detail of the distance indicators (right).

Most of the following primitives are able to show their own context menu (right click in the interactive mode) and some of them can be grouped together (e.g. via the bounding box parameter object_name).

BILLBOARD

A billboard is a simple object created as a plane mesh with a texture representing a real-world object. The billboard always faces the camera and can illustrate the movement of the represented object. Currently, there are four predefined objects available: chair, table, person and milk.

Figure 3: Sketchy visualization of detected objects using the billboard GUI primitive.

Figure 4: The billboard can also illustrate the movement of an object, e.g. a walking person.

PLANE

The plane is a primitive that visualizes a simple un-textured plane with no interaction allowed. The plane can be tagged.

Figure 5: Tagging a plane in the 3D scene.

BOUNDING BOX

The bounding box allows interaction with the selected object, such as movement or rotation. All actions are available and configurable from the menu (right mouse click on the bounding box). Moreover, the bounding box primitive is able to show the real object dimensions.

Figure 6: The bounding box primitive in two different modes: object manipulation and visualization of the real dimensions.

UNKNOWN OBJECT

Unknown objects may represent obstacles in the environment or manually added objects that were not detected automatically. The unknown object can be arbitrarily rotated, translated and scaled.

Figure 7: An obstacle manually inserted into the scene using the unknown object primitive.

REGULAR OBJECT

A regular object represents a detected or real-world object that has its mesh in an object database. The object can show its bounding box (if specified) and it can be manually rotated, translated and scaled in the scene. Possible pre-grasp positions can also be shown around the regular object. The visualization of pre-grasp positions helps the operator move the gripper to a correct position for grasping.

Figure 8: Visualization of detected objects and possible pre-grasp positions stored in the SRS object database.

The mesh can be specified in a resource file, which can be any type of surface mesh supported by RViz: an .stl model, Ogre's .mesh format (version 1.0), or the COLLADA .dae format (version 1.1). The resource file must be specified using URI-form syntax; see the ROS package resource_retriever for details, including the package:// specification.

PREDICTED ROBOT POSITION

For the visualization of the robot's predicted movement, a special primitive has been prepared. The predicted robot positions after 1, 2 and 3 seconds are visualized using interactive markers.

Figure 9: Visualization of the predicted robot position in RViz.

VELOCITY LIMITED MARKER

In many real-world situations the robot may refuse to move or rotate the platform in some directions because the platform or the arm is very close to moving or static obstacles. In

these situations it is very frustrating if the remote operator cannot easily decide in which directions the movement is allowed and in which directions the robot cannot be moved.

Figure 10: Velocity limited marker shown when the robot cannot move in a particular direction (left) and cannot rotate in place (right).

To help the remote operator decide how to drive the robot manually while avoiding the obstacles, we have prepared another HRI primitive, the velocity limited marker. When the robot is close to an obstacle, it automatically reduces its maximum velocity, down to zero, in that particular direction to avoid the collision; this is the standard behaviour of the robot's low-level motion interface. The velocity limited marker shows special markers around the robot in the 3D scene indicating in which directions the velocity of the robot is limited (see Figure 10). This helps the remote operator to quickly decide which obstacle is the problem and how to drive the robot.

IN-SCENE TELEOP

In order to provide an intuitive way to drive the Care-O-bot directly from the Virtual 3D Display, we have developed a special in-scene marker, the COB Interactive Teleop, based on ROS Interactive Markers.

Figure 11: Driving the robot using the in-scene teleop. Driving forward and backward is done using the red arrows, strafing to the left and right using the green arrows, and rotating using the blue circle. The robot can also be driven to a specified position by moving the yellow disc.

The in-scene teleop allows the operator to move the robot, rotate it in place, or grab the yellow disc in the middle (Figure 11), in which case the robot automatically starts following the disc, trying to reach its position.

FOV AND LIVE KINECT DATA VISUALIZATION

An important question was how to combine the historical data stored in the 3D map of the environment with the live RGB-D data coming from the Kinect device (i.e. the colored point cloud). It is obviously important to show the remote operator the latest data and not obstruct the view with artifacts stored in the 3D environment map, i.e. the previous recordings. Moreover, the resolution of the 3D map is lower than that of the live Kinect data. Even though our 3D mapping module is able to filter out outdated data, it is preferable to present the data in the maximum available quality.

Figure 12: Visualization of the current FOV (yellow lines) and the combination of live Kinect data rendered inside the FOV with historical data from the 3D voxel-based map outside of the field of view.

The final solution, tested during the UI_PRO user tests in February, uses the information about the position of the robot and its torso to cut out the part of the 3D map inside the current field of view and show the live Kinect data there. The maximum camera distance up to which the live points are shown can be limited, because the effective range of the Kinect sensor is limited too. To make the difference between the live and the historical data clear, the current field of view of the Kinect sensor is visualized using two thin lines so that they do not obstruct the view.

2.2 STEREOSCOPIC VISUALIZATION

Stereoscopic visualization strongly increases the user experience and the feeling of being present in the scene. It also simplifies common tasks that depend on the best possible spatial orientation of the operator: grasping, navigating the robot in the room, judging the robot arm position, assessing the mutual position and orientation of scene objects, avoiding obstacles, making better and faster use of helper geometry without the need for additional scene views and, of course, perceiving distances. All these tasks become much easier with the third dimension added. Without stereo visualization, the operator often needs to change the view direction to see the scene from more angles.

Visualization of the robot is already solved in ROS by the RViz program, which can represent all the required elements such as point clouds and user-defined geometry. An easy way to provide the stereo visualization is to adapt this program to one of the available solutions. The scene in RViz is rendered using the Ogre library, which, however, in the version used (1.7.3)

is not ready for any use of stereoscopic display in the Linux operating system. Thus it was necessary to modify the Ogre library as well as RViz itself.

There are several commercial solutions for stereo display in computer graphics. We use NVidia 3D Vision technology to achieve the stereoscopic effect in RViz. NVidia 3D Vision is a stereoscopic kit consisting of LC shutter glasses and driver software. The glasses use a wireless (IR) protocol to communicate with an emitter connected to a USB port; this emitter provides the timing signal. The stereo driver software performs the stereoscopic conversion by taking the 3D models submitted by the application and rendering two separate views from two slightly shifted viewpoints. A fast stereo LCD monitor (120 Hz) shows these two images alternately, and the shutter glasses, controlled by the emitter, present the image intended for the left eye while blocking the right eye's view, and vice versa.

Figure 13: The stereoscopic visualization principle.

2.3 ASSISTED ARM NAVIGATION

As an alternative to the fully autonomous manipulation operations, a new semi-autonomous solution has been designed and developed by BUT as a part of the UI_PRO interface. Assisted arm navigation can be used in cases when automated planning of the arm trajectory fails or is not applicable. This alternative solution is based on a set of packages and offers a complete pipeline for manipulation tasks, consisting of object detection (see Section 2.6), arm trajectory planning and grasping (Sections 2.3 and 2.4).

The arm trajectory planning is based on the functionality of the arm_navigation stack (a standard ROS stack) and BUT's Environment Model. The voxel-based 3D map of the environment is used to provide collision-free arm planning. When needed, the human operator can set a goal position of the arm end effector in the 3D virtual environment. The goal position can be set using interactive markers or, more intuitively, with the Space Navigator 3D positioning device. While the virtual end effector position is being adjusted, the real manipulator does not move. The interface indicates whether the desired position is reachable by the arm and whether it is free of collisions with the environment model or the object representations. A collision-free trajectory from the start position to the goal one is planned automatically. Before executing the planned motion on the robot, the operator can run its visualization several times and decide whether the motion plan is safe. A sketch of how this step can be triggered from a higher-level component is given below.
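To give an idea of how a higher-level component (e.g. a SMACH generic state) can hand control over to the operator, the following sketch calls the assisted arm navigation step through a generic actionlib client. The package, action and field names (srs_assisted_arm_navigation_msgs, ArmNavAction, the task text and the result) are assumptions for illustration only and may differ from the real SRS message definitions.

#!/usr/bin/env python
import rospy
import actionlib
# NOTE: package, action and field names below are assumptions, not the real SRS definitions.
from srs_assisted_arm_navigation_msgs.msg import ArmNavAction, ArmNavGoal


def request_manual_arm_navigation(task_text):
    # Hand control over to the remote operator and wait until he or she
    # presses "Task completed" (or the timeout expires).
    client = actionlib.SimpleActionClient("/but_arm_manip/arm_nav", ArmNavAction)  # assumed name
    client.wait_for_server()

    goal = ArmNavGoal()
    goal.task = task_text        # text shown to the operator in the RViz plugin (assumed field)
    client.send_goal(goal)

    finished = client.wait_for_result(rospy.Duration(600.0))
    return client.get_result() if finished else None


if __name__ == "__main__":
    rospy.init_node("assisted_arm_navigation_client_example")
    print(request_manual_arm_navigation("Move the gripper to the pre-grasp position"))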

Figure 14: The arm goal position visualization and the planned trajectory animation.

The assisted arm navigation has been prepared with generality in mind, so it can be used on any robot supporting the arm_navigation stack. The solution is divided into several ROS packages to separate the API definition (messages, services and actions), the backend, the GUI (RViz plugins) and the SRS integration. The integration into the SRS ecosystem has been developed in the form of SMACH generic states (one generic state for each scenario). These generic states use the assisted arm navigation API and provide integration with other components (object DB, grasping, etc.).

There are basically three scenarios in which the assisted arm navigation can be used. The first is the case when there is a known object but the robot is not able to plan the arm trajectory fully autonomously for some reason (a too complex environment, for instance). In this case, the operator is asked to select one of the pre-computed pre-grasp positions (provided by srs_grasping), simulate the movement, execute it, and give the control back to the decision-making module. The second scenario is the situation when the robot cannot finish some manipulation task and the operator may be asked to move the arm to a safe position. The third scenario is grasping of an object which is not stored in the object database or cannot be automatically detected, for instance because of poor lighting or occlusion. In this case, the remote operator is first asked to manually specify a rough bounding box of the

object, then to navigate the arm, grasp the object (using the assisted grasping module), and finally navigate the arm to put the object on the robot's tray.

Figure 15: Example of collision checking against the environment. The goal position cannot be reached because it is in a collision.

2.4 ASSISTED GRASPING

Assisted grasping has been developed to allow safe and robust grasping of unknown or unrecognized objects by the SDH gripper equipped with tactile sensors. It has a separate API definition (i.e. an actionlib interface), code, and a GUI in the form of an RViz plugin. When calling the grasp action, it is possible to specify a target configuration (angles) for all joints of the gripper, the grasp duration, and maximum forces for all tactile pads. Then, for each joint, velocities and acceleration and deceleration ramps are automatically calculated in such a way that all the joints reach the target configuration at the same time. If the measured force on a pad exceeds the requested maximum force, the movement of the corresponding joint is stopped. With different target configurations and appropriate forces, a wide range of objects can be grasped: squared, rounded, thin, etc. The assisted grasping package also offers a node

for preprocessing the tactile data using median and Gaussian filtering with configurable parameters.

2.5 SPACE NAVIGATOR

The Space Navigator (SN) is a 6 DOF positioning device. It has been integrated into numerous professional interfaces to make specific 3D manipulation tasks more intuitive and faster. In the assisted arm navigation scenarios, it is used to set the goal position and orientation of the virtual end effector. The control input from the SN is recomputed so that the changes in the end effector position are view-aligned (see Figure 17). All changes are made in the user's perspective (i.e. in the viewing camera coordinate system). This lowers the mental load of the operator and leads to a very intuitive way of interaction.

Figure 16: Space Navigator - an alternative UI_PRO input device.

Moreover, the sensitivity of the SN control is non-linear, which allows the operator to make very precise changes and, at the same time, to move the end effector over relatively long distances. In addition, a package for driving the robot base using the SN device has been developed. It also takes into account the position of the observing camera in the 3D virtual environment, so the control of the robot base is view-aligned as well. The remote operator may, in certain situations, decide to switch off the collision avoidance using one SN button; the second button may be used to switch control to a robot-perspective mode. A simplified sketch of the view-aligned mapping is shown below.
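The following numpy-only sketch illustrates the view-aligned mapping: the raw Space Navigator deflection is scaled with a non-linear (cubic) gain and rotated from the viewing camera frame into the world frame before being applied to the virtual end effector. The gain value and the cubic curve are illustrative choices, not the tuned SRS parameters.

import numpy as np


def nonlinear_gain(v, gain=0.02, power=3):
    # Cubic mapping: very fine control near zero, larger steps for big deflections.
    return gain * np.sign(v) * np.abs(v) ** power


def view_aligned_delta(sn_translation, camera_rotation_world):
    """sn_translation: 3-vector from the Space Navigator (device units).
    camera_rotation_world: 3x3 rotation matrix of the viewing camera in the world frame.
    Returns the translation to apply to the virtual end effector, in world coordinates."""
    delta_camera = nonlinear_gain(np.asarray(sn_translation, dtype=float))
    return camera_rotation_world.dot(delta_camera)


if __name__ == "__main__":
    # Camera looking along the world -y axis (purely illustrative values).
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, -1.0, 0.0]])
    print(view_aligned_delta([0.0, 0.0, 2.0], R))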

Figure 17: View-aligned arm end effector control using the Space Navigator.

2.6 ASSISTED DETECTION

In the assisted arm navigation scenario, when there is an unknown object, it is necessary to have its bounding box, which is then considered when planning the arm trajectory. For this purpose, a solution based on the BB estimator (see D4.6.2) has been developed. There is an actionlib interface to request the user action. An image stream from the robot's camera is shown to the remote operator, who is asked to select a region of interest in the image. Then the approximate position and size of the 3D bounding box are estimated and the result is placed in the 3D scene. The operator can fine-tune this rough estimate using interactive markers or, if the estimate is not adequate, select a new region of interest in the image. The sketch below illustrates the underlying idea of turning a 2D region of interest into a rough 3D box.
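The sketch below is a simplified stand-in for the BB estimator from D4.6.2, shown only to illustrate the principle under a pinhole camera model: the depth values inside the selected ROI give the object distance, and the ROI corners are back-projected to obtain the box extent. The intrinsics in the example are typical Kinect values, not calibration data from the project.

import numpy as np


def roi_to_bounding_box(depth, roi, fx, fy, cx, cy):
    """depth: HxW depth image in metres; roi: (u_min, v_min, u_max, v_max) in pixels.
    Returns (center, size) of a rough axis-aligned box in camera coordinates."""
    u0, v0, u1, v1 = roi
    patch = depth[v0:v1, u0:u1]
    valid = patch[np.isfinite(patch) & (patch > 0)]
    z_med = np.median(valid)                      # robust estimate of the object distance

    # Back-project the ROI corners at the median depth.
    x0 = (u0 - cx) * z_med / fx
    x1 = (u1 - cx) * z_med / fx
    y0 = (v0 - cy) * z_med / fy
    y1 = (v1 - cy) * z_med / fy

    # Box extent along the optical axis taken from the depth spread inside the ROI.
    z0, z1 = np.percentile(valid, [10, 90])

    center = ((x0 + x1) / 2.0, (y0 + y1) / 2.0, (z0 + z1) / 2.0)
    size = (abs(x1 - x0), abs(y1 - y0), max(z1 - z0, 0.05))
    return center, size


if __name__ == "__main__":
    d = np.full((480, 640), 1.2)                  # synthetic flat depth image
    print(roi_to_bounding_box(d, (300, 200, 360, 280), fx=525.0, fy=525.0, cx=319.5, cy=239.5))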

Figure 18: Selection of the ROI containing the target object in an image stream.

Figure 19: Adjusting the rough bounding box estimated from the chosen ROI in the 3D virtual scene.

3 DYNAMIC ENVIRONMENT MODEL AND 3D MAPPING

Planning the robot's motion requires more than just sensor scans of the currently visible environment. It must be possible to process the incoming information, categorize it and save it for later use. The remembered (and currently invisible) scene will, of course, change over time. This stored information is needed, for example, when planning a longer route from the current position to a place visited earlier. Another specific case is collision-free arm planning, which requires a very detailed model, or map, of the 3D environment.

3.1 RELATED WORK

There are several approaches to the problem of mapping the environment in which the robot moves. They range from registering and integrating incoming scans (whether from the Kinect, a laser sensor or any other device) into an existing structure, up to attempts not only to accumulate the incoming data but to recognize and describe what the robot sees, where it is, and what this means for future motion planning. The latter amounts to building a semantic model of the environment: assigning importance and meaning to the incoming data and deriving new information from it. This subject (the assignment of meaning to detected geometric objects) is investigated in a large number of works with different approaches. Some articles aim at recognition of a position or locality (whether it has already been visited [Cum08], room detection by placed signs [Tap05], or attempts to interpret the purpose of a room from the objects found in it [Vas07]); others aim at categorization of space (in relation to the types of objects found at a given point [Tor03], or classification using AdaBoost [Moz07]).

Figure 20: 3D environment mapping using the OctoMap library. Image adapted from [Wur10].

Our system belongs to the category of low-level mapping. It processes the data coming from the input sensors and places them into the current model. Gradually, this creates a data set

suitable for further work. The advantage of this approach is the ability to process any piece of data independently of the real-time input. It is thus possible to gradually replace parts of the map by detected entities and to create a higher-level semantic model.

3.2 ENVIRONMENT MODEL

The environment model serves as an encapsulation of the data received from sensors, automatically detected objects and geometric primitives, and entities marked by the user (i.e. the operator), all in one place. The environment model provides services for data mining (for example, it returns all obstacles near the robot), supplies the information needed for more sophisticated robot or arm navigation and orientation in 3D space, and also allows data compression for transmission: instead of large point-cloud data, predefined object shortcuts can be sent for recognized objects.

The current version of the environment model is built upon the OctoMap library [Her11], which implements octree space partitioning and a voxel occupancy system. OctoMap models the environment as a grid of cubic volumes of equal size. The octree structure is used to organize this grid hierarchically: each node in the octree represents the space contained in a cubic volume, and this volume is recursively subdivided into eight sub-volumes until a preset minimum voxel size is reached. The OctoMap library uses probabilistic volume occupancy estimation to cope with problems associated with input sensor noise.

Figure 21: BUT environment model showcase.

The heart of the whole system, and its most complex part, is the octomap module. It stores the incoming data into the octomap structure described above and provides some additional functionality. It can load and save the whole data set to a local file. The incoming cloud is split into ground-plane and non-ground parts, speckles can be removed, and outdated data and noise are removed by our modified ray-cast

algorithm. As a complementary feature, manual octomap modifications can be made: a box-shaped part of the map can be marked as free or occupied.

Figure 22: Manual editing of the environment model.

The environment model is divided into so-called plugins, which allows its structure to be extended easily. There is a whole family of plugins available:

- Simple point cloud plugin. This plugin can operate in two modes: as an input plugin, it transforms the incoming cloud into the internally used representation; in output mode, it scans through the octomap data (at the preset detail level) and publishes the map of the environment as a point cloud on the corresponding topic.
- Limited point cloud plugin. The limitation lies in the octomap scanning phase: the plugin subscribes to the RViz camera position topic and publishes only the part of the map visible from the operator's perspective. This is very useful when some external device is used to control the robot and only the part of the map visible on the screen is needed.
- Compressed point cloud plugin. Works like the previous plugin, but the robot's internal camera position is used as the view position, so the plugin publishes only the differences made to the internal octomap. The plugin can be used in cooperation with the compressed point cloud publisher node, which collects the published partial clouds sent over the network to the remote operator's PC and combines them into the same octomap model as the environment model maintains. This heavily reduces the amount of transferred data, because only a fraction of the whole 3D map is usually sent over the network.

Figure 23: Example of the published collision grid used for collision-free arm trajectory planning.

Other output data formats are covered by the following publishing plugins: the collision map plugin, the collision grid plugin, the collision object plugin, and the 2D map plugin. Each of these plugins publishes messages of the appropriate data type and each can use a different depth of the octomap tree scan.

An important property of the environment model is the possibility of saving the currently known and scanned surroundings to a file and the ability to restore and further update the whole previously recorded model. Navigation and path-finding algorithms can obtain and use the whole data set, in different resolutions. The octomap and the collision map can be locked and manually modified, for example to remove an unimportant part that confuses the arm trajectory planning algorithm. Generally, this concept allows several data inputs to be connected and combined in one collecting channel and the acquired information to be easily used for higher-level planning, visualization and robot motion control.

3.3 DISCUSSION

The presented environment model is the computationally most demanding part of the whole system. Therefore, the possibility to configure the individual processing steps is important. This allows scaling and load balancing depending on the requirements for the map accuracy and speed of response.

Moreover, it is not necessary (and in fact not even desirable) to process every incoming frame. The model is intended rather for the construction, maintenance and use of the stable parts of the scene around the robot. When all standard functions are turned on (input in the form of a point cloud, filtering of solitary cells in the octomap, rapid noise removal from the visible part of the map, and output again in the form of a point cloud and a collision map), approximately 3 frames per second are processed with a load of one processor core in the range of about 50-70%. The most challenging operations are mainly writing to the hierarchical octomap structure, data filtering, and the point cloud registration to the existing part of the map.
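For reference, the per-voxel occupancy estimate that these write operations maintain follows the clamped log-odds update described in the OctoMap literature [Her11, Wur10]: a new measurement z_t of node n is merged into the previous estimate and clipped to configurable bounds,

    L(n | z_1:t) = max( min( L(n | z_1:t-1) + L(n | z_t), l_max ), l_min ),   where L(n) = log( P(n) / (1 - P(n)) ).

The clamping bounds l_min and l_max prevent the confidence from saturating, which keeps the map updatable when the environment changes - exactly the behaviour needed for the dynamic scenes discussed above.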

4 VIRTUAL 3D DISPLAY IN RVIZ

The prototype of the Virtual 3D Display (an extension of the basic UI_PRO functionality), introducing the principles of visualization and interaction patterns related to the Dynamic Environment Model, has been developed in ROS/RViz [Her11]. Several new loadable plugins are provided that extend the functionality of the standard RViz module:

- a simplified, user-friendly interface for manual arm trajectory planning and assisted grasping,
- controls to adjust the behaviour (enable/disable features) of the Dynamic Environment Model,
- extension displays able to visualize data (point clouds, etc.) according to our specific needs,
- visualization of the described HRI primitives using Interactive Markers, and
- stereoscopic 3D environment visualization by means of NVidia 3D Vision technology.

4.1 RVIZ PLUGINS FOR ASSISTED ARM MANIPULATION AND GRASPING

The user interface for the assisted arm navigation consists of the virtual manipulator representation in the 3D scene and of an RViz plugin. The controls in the plugin are disabled by default; when there is a task (i.e. the action interface has been called), the operator is notified by a pop-up window and the appropriate controls become active. The simple GUI allows the operator to start setting a goal position, plan a trajectory to the desired position, execute it, and stop the execution in an emergency. The operator may decide to plan more trajectories for one task. When the task is finished, the operator clicks on the "Task completed" button. If, during an attempt to perform the manipulation task, the operator finds the assisted detection not precise enough, it is also possible to repeat the detection process.

Several additional controls have been developed to help the operator fulfil the tasks. In cases when the environment is very complex or the 3D model is not precise because of gaps or noise, the operator can enable the "Avoid fingers collisions" functionality: the virtual gripper is extended by a slightly bigger cylinder, and this cylinder is then considered when planning the trajectory.

Figure 24: Artificial cylinder preventing finger collisions with the environment.

To speed up the process, the operator may choose one of the predefined goal positions. The predefined positions are of two types: absolute (denoted by "A") and relative ("R"). An absolute position is defined in the robot coordinate frame and can help the operator to move the virtual gripper faster, for instance to the robot's tray. The second type of predefined position is relative to the current virtual gripper pose and can be used, for instance, when it is necessary to lift an object. An illustrative structure for such a configuration is sketched below. There is also an undo functionality which provides a configurable number of back-steps. If the operator wants to keep the current orientation (or position) of the virtual gripper, it is possible to press the right (or left) button of the Space Navigator and lock it. These locks are indicated by checkboxes in the "SpaceNavigator locks" section.
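For illustration only, a list of such predefined positions could be structured as follows; the labels, frame names and poses are made up, and this is not the actual SRS configuration format.

# Illustrative only - not the SRS configuration format. "A" entries are absolute
# poses in the robot coordinate frame, "R" entries are offsets relative to the
# current virtual gripper pose.
PREDEFINED_GRIPPER_POSITIONS = {
    "A: above tray": {"frame": "base_link", "xyz": (0.25, 0.00, 1.05), "rpy": (0.0, 1.57, 0.0)},
    "A: home":       {"frame": "base_link", "xyz": (0.00, 0.35, 1.20), "rpy": (0.0, 0.00, 0.0)},
    "R: lift 10 cm": {"frame": "gripper",   "xyz": (0.00, 0.00, 0.10), "rpy": (0.0, 0.00, 0.0)},
}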

Figure 25: Assisted arm navigation GUI in different states.

When grasping, the operator can select an object category and then press the "Grasp" button. The categories are predefined in a configuration file; each category has a name, a target configuration of the SDH joints, and the desired maximum forces for all fingers. In the UI, the operator can see the names and the corresponding maximum forces. After execution of the grasp, the operator can decide whether it was successful using the tactile data visualization. A simplified sketch of the grasp synchronization performed underneath (see Section 2.4) follows.
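The following numpy-only sketch illustrates the synchronization idea described in Section 2.4: every joint is given a velocity proportional to its remaining travel so that all joints reach the target configuration after the requested grasp duration, and a joint is frozen once its tactile pad exceeds the configured maximum force. It is a simplification (constant velocities instead of full acceleration/deceleration ramps), not the SRS controller.

import numpy as np


def joint_velocities(current, target, duration):
    """Velocities [rad/s] such that every joint arrives at `target` after `duration` seconds."""
    return (np.asarray(target, dtype=float) - np.asarray(current, dtype=float)) / float(duration)


def grasp_step(positions, velocities, pad_forces, max_forces, dt):
    """One control step: advance the joints, freeze those whose pad force exceeds the limit."""
    positions = np.asarray(positions, dtype=float)
    moving = np.asarray(pad_forces) < np.asarray(max_forces)
    return positions + np.where(moving, velocities, 0.0) * dt


if __name__ == "__main__":
    cur, tgt = [0.0, 0.0, 0.0], [0.9, 0.9, 0.9]   # three finger joints (example values)
    v = joint_velocities(cur, tgt, duration=3.0)
    print(grasp_step(cur, v, pad_forces=[0.1, 5.0, 0.2], max_forces=[2.0, 2.0, 2.0], dt=0.1))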

Figure 26: Assisted grasping UI in different states.

4.2 3D ENVIRONMENT MAPPING PLUGINS

The Octomap Control Panel (OCP) combines the essential features for interactive control of the environment model. For any direct manipulation of the data, an interactive marker object must be inserted into the scene; the first set of buttons serves this purpose. The "Add selection box" button inserts the element at the position where it was inserted previously (or at the origin of the map coordinate system if the box has never been inserted before). The "Hide selection box" button deletes the interactive element from the scene.

Figure 27: Octomap control panel.

After the interactive marker has been inserted, positioned and scaled, the octomap or the collision map can be directly modified. In order to make the changes permanent, the relevant part of the dynamic model should be locked so that new data from the sensors are not added and our modifications are not overwritten; the check boxes "Pause mapping server" and "Lock collision map" serve this purpose. The user can then either insert data into the octomap (button "Insert points at selected position"), delete the octomap area contained in the box ("Clear point-in-box"), insert an obstacle into the collision map ("Insert obstacle at selected position"), or delete a part of the collision map ("Clear collision map in box"). The remaining button in the 3D environment map group can be used to delete the entire octomap.

Figure 28: Near and far clipping planes set to the default distances.

Figure 29: Near clipping plane slightly moved to hide interfering geometry.

The Camera Control Panel controls one aspect of the Ogre camera in the RViz visualization window: the near and far clipping planes. In some cases the displayed scene geometry interferes with the view of the robot, it is not possible to rotate the scene so that a direct view is achieved, and, for example, the operator cannot see an object that the robot should grasp. In such a situation, it is possible to move the clipping planes so that the obstructing geometry disappears. Each individual slider controls the position of one plane. The total visible distance can be set by the value of the spin box, where 100 means 100 percent of the base default value used in RViz.

4.3 STEREOSCOPIC VISUALIZATION

As mentioned above, for the stereoscopic display we use the hardware kit developed by NVidia, together with a PNY NVIDIA Quadro 4000 graphics card and an Asus VG278H 27" LCD. When properly set up, the OpenGL scene is displayed alternately for the left and the right eye while the shutter glasses cover the opposite eye. The Ogre library allows this effect to be used, but the library version used in RViz (ROS Electric) supports it only in a Windows environment. There was an unfinished patch, which was used as a base for the necessary adjustments. It combines several things:

- It adds new configuration parameters for Ogre, which are then evaluated and passed to the OpenGL layer.
- It modifies the graphics initialization functions.
- It adds new camera parameters that can be changed to achieve a better 3D effect.
- It adds a method, hooked into the rendering loop, that offsets the position of the camera according to the set positions of the right and left eye.

In addition to patching Ogre (this was finally solved using patches applied automatically when compiling the stack), it was necessary to modify RViz itself. A parameter was added that can be used to turn the stereo mode on or off when the program is started, the application initialization method was changed to match the modified library, and support for user-set parameters (position of the eyes, focal length) was added. A simplified sketch of the per-eye camera placement is shown below.
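The per-eye camera placement can be sketched as follows: the mono RViz camera is shifted along its right vector by half of the eye separation for each eye, while both eyes keep converging on the focal point. This is a generic formulation for illustration; the actual Ogre patch may use a different (e.g. off-axis) projection.

import numpy as np


def eye_cameras(cam_pos, cam_dir, cam_up, eye_separation, focal_length):
    """Return (position, look-at point) for the left and right eye cameras."""
    cam_pos, cam_dir, cam_up = map(np.asarray, (cam_pos, cam_dir, cam_up))
    right = np.cross(cam_dir, cam_up)
    right /= np.linalg.norm(right)

    focal_point = cam_pos + cam_dir / np.linalg.norm(cam_dir) * focal_length
    left_eye = cam_pos - right * eye_separation / 2.0
    right_eye = cam_pos + right * eye_separation / 2.0
    # Each eye is rendered into its own buffer and the shutter glasses
    # alternate between the two images (see Section 2.2).
    return (left_eye, focal_point), (right_eye, focal_point)


if __name__ == "__main__":
    print(eye_cameras(cam_pos=(0.0, -2.0, 1.5), cam_dir=(0.0, 1.0, -0.3),
                      cam_up=(0.0, 0.0, 1.0), eye_separation=0.065, focal_length=2.0))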

The resulting stereoscopic effect indeed brings the expected improvement in the perception of the surrounding space and makes it easier to control the robot. It significantly reduces the need to constantly turn the scene and look from different angles in order for the operator to get a more complete idea of the layout of the room and the location of the objects to be manipulated.

5 UI_PRO USER TESTS

To validate the usefulness of the developed solutions, extensive user tests have been performed. The user tests consisted of two experiments, which took place in February and March 2013; 55 participants were asked to complete prepared tasks within two weeks of intensive testing. The user tests were organized by HDM and prepared mainly in cooperation between the SRS partners HDM, BUT, and IPA.

The experiments were designed to obtain validated interaction patterns, i.e. solutions of good interaction design practice, from the conceptual user interface work and the user interface features implemented by the SRS partners. By applying criteria such as innovativeness and scientific relevance, we chose two themes for investigation among the interaction patterns. The two themes, availability of 3D environment models for remote semi-autonomous navigation and availability of stereoscopic presentation during remote semi-autonomous manipulation and navigation, were investigated in the two experiments under controlled conditions to obtain validated human-robot interaction patterns for the semi-autonomous remote operation of domestic service robots.

Experiment 1 aimed to expose the advantages and disadvantages of different 3D environment models. Users tried to solve navigation tasks in a home-like environment, unsolvable for the existing autonomous system, using a voxel-based 3D map (BUT's Environment Model) or a geometric map (IPA's plane detection). During the navigation tests, users controlled the robot using the SpaceNavigator device. The second experiment was designed to discover potential advantages of the stereoscopic user interface (better depth perception, higher precision, etc.) for the navigation and manipulation tasks. The navigation task was similar to Experiment 1; the manipulation tasks were based on the assisted arm navigation approach, where users were asked to perform pick-and-place tasks in cluttered scenes.

The whole testing site has been modelled with very high accuracy in the Gazebo simulator (Figure 30). The simulated environment is based on CAD models, uses realistic textures, and models all objects present in the real environment. This enabled us to carry out extensive pre-tests and improve the user interface before starting the tests with the real robot.

Figure 30: Overview of the testing environment (modelled in 3D for pre-tests).

5.1 EXPERIMENTAL CONDITIONS

In Experiment 1, there were three experimental conditions (three different types of environment visualization), as depicted in Figure 31, with 9 participants in each condition. In Experiment 2, there were two experimental conditions (monoscopic and stereoscopic presentation) with 14 participants in each condition.

Figure 31: Experimental conditions: the top row shows the three types of environment representation used in Experiment 1 (voxel-based 3D map, geometric 3D map based on plane detection, and the control condition without a 3D environment model); the bottom row shows a participant with stereo glasses during Experiment 2.

In each of the experimental conditions, participants carried out several tasks, as depicted in Figure 32. The main metric we measured was the time to complete the task. Further, participants filled in a questionnaire after each task with questions on situation awareness and telepresence.

Figure 32: Examples of tasks users accomplished through the remote user interface (Experiment 1: living room navigation task, corridor navigation task, bedroom search task; Experiment 2: grasping of apple juice, grasping of a book).

A new SRS package (srs_user_tests) has been developed to support all these tests. It contains all the necessary configuration files and makes launching a particular test under a specific condition easy with just two commands: one for the robot and one for the operator's PC. These commands also start logging of the specified data for subsequent analysis.

5.2 SUMMARY OF RESULTS

A more detailed description of the experiments and results can be found in deliverables D2.2.2 and D6.2. This section briefly summarizes the most important results of the study.

In both experiments, the most important metric was the time it took a user to complete the task. In the living room navigation task, we also counted the number of object collisions. Overall, users were able to complete the manipulation and navigation tasks in 100% of the cases; this task success rate indicates that the user interface was highly effective at accomplishing what it was designed for. The stereoscopic condition showed advantages of between 10% and 30% (depending on the task) in task completion time. Results on the availability of 3D maps showed that the 3D map conditions performed from equally well (in one of the navigation tasks) to more than 100% better in task completion time, compared to the control condition without a 3D map.

Results on the system's overall quality, as measured by the AttrakDiff instrument (Figure 33), are encouraging and show a vast improvement over the previous study in Milan, where participants used an early version of the UI_PRO user interface. All average ratings are well above neutral, with some reaching close to maximum values. This indicates that users find the system very attractive and adequate.

Figure 33: Comparison of AttrakDiff ratings of the system's attractiveness between the 2012 Milan study (top) and the 2013 Stuttgart study (bottom); PQ is usability; HQ are hedonic qualities (e.g., fun to use); ATT is overall attractiveness.

Only a few usability issues and other user interface issues were found. The area with the most issues was the positioning and resizing of the bounding box in the manipulation tasks. As the arrows for resizing the box were quite small, some users had trouble hitting them with the mouse, especially when using the stereoscopic user interface but also in the condition with monoscopic presentation. Also, these arrows were sometimes covered by voxels of the scene. The rotation wheel of the bounding box caused similar problems in some cases, as it was partially covered by the box itself unless the box was resized first. It was also mentioned that the resolution of the voxel map could be higher for better clarity; due to the rather large environment, the resolution had to be reduced for performance reasons. A minority of users seemed to have trouble using the SpaceNavigator despite the training. We assume that additional training would help resolve this issue. On the navigation task, some users thought that the robot moved too slowly or did not properly react to their input from the Space Navigator. This was due to the thresholds set for collision avoidance and due to the passage to be navigated through being very narrow (the robot slows down when close to obstacles). The issues found can be fairly easily remedied in the next version of the user interface.

6 PREREQUISITES

The core prerequisites for the software to be used are:

- Linux OS (developed on Ubuntu 11.10),
- Robot Operating System (ROS) (developed on the Electric version),
- Care-O-bot stacks installed in ROS,
- stacks for COB simulation in the Gazebo robot simulator,
- the srs_public stack.

MANUAL ARM MANIPULATION AND PLANNING

- the core prerequisites,
- additional stacks: cob_manipulation, arm_navigation, warehousewg, joystick_drivers.

DYNAMIC ENVIRONMENT MODULE

- OctoMap library,
- Ogre library stacks.

VIRTUAL 3D DISPLAY

- RViz utility and Interactive Markers installed in ROS [Gos11, Jon11].

The software components are the property of the Brno University of Technology. Most of the components are available under the LGPL open source license, or a license for academic/research purposes can be granted to any prospective user.

7 DOCUMENTATION OF PACKAGES

The proposed software components are mostly realized as ROS nodes/services in the C++ or Python programming languages. This section briefly describes the interfaces of the newly created ROS nodes and their integration with the shadow robotic system developed within the SRS project.

7.1 GUI PRIMITIVES FOR HRI

All GUI primitives are implemented as Interactive Markers [Gos11]. All necessary ROS services, our own interactive marker server called but_gui_service_server, and the relevant C++ source files can be found in the packages srs_interaction_primitives, srs_ui_but, cob_interactive_teleop and cob_velocity_filter. Predefined ROS services can be used to add, modify or remove GUI primitives.

USAGE:

First, run the server:

roslaunch srs_interaction_primitives interaction_primitives.launch

LIST OF AVAILABLE SERVICES:

/interaction_primitives/add_bounding_box
/interaction_primitives/add_billboard
/interaction_primitives/add_plane
/interaction_primitives/add_plane_polygon
/interaction_primitives/add_object
/interaction_primitives/add_unknown_object
/interaction_primitives/remove_primitive
/interaction_primitives/change_pose
/interaction_primitives/change_scale
/interaction_primitives/change_color
/interaction_primitives/change_description
/interaction_primitives/get_update_topic
/interaction_primitives/change_direction
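A hypothetical usage sketch for one of the services listed above is given below. The service class and request fields (AddBillboard, frame_id, name, type, pose, scale) are guesses for illustration only and are not copied from the actual srv definitions in srs_interaction_primitives.

#!/usr/bin/env python
# Hypothetical client sketch; the srv class and field names are assumptions.
import rospy
from geometry_msgs.msg import Pose
from srs_interaction_primitives.srv import AddBillboard   # assumed srv name

rospy.init_node("primitive_client_example")
rospy.wait_for_service("/interaction_primitives/add_billboard")
add_billboard = rospy.ServiceProxy("/interaction_primitives/add_billboard", AddBillboard)

pose = Pose()
pose.position.x, pose.position.y, pose.position.z = 1.0, 0.5, 0.0
pose.orientation.w = 1.0

# Place a "person" billboard one metre in front of the robot (illustrative values).
add_billboard(frame_id="/map", name="person_1", type="person", pose=pose, scale=1.8)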


More information

Premiere Pro Desktop Layout (NeaseTV 2015 Layout)

Premiere Pro Desktop Layout (NeaseTV 2015 Layout) Premiere Pro 2015 1. Contextually Sensitive Windows - Must be on the correct window in order to do some tasks 2. Contextually Sensitive Menus 3. 1 zillion ways to do something. No 2 people will do everything

More information

COS 116 The Computational Universe Laboratory 10: Computer Graphics

COS 116 The Computational Universe Laboratory 10: Computer Graphics COS 116 The Computational Universe Laboratory 10: Computer Graphics As mentioned in lecture, computer graphics has four major parts: imaging, rendering, modeling, and animation. In this lab you will learn

More information

A 3D Representation of Obstacles in the Robot s Reachable Area Considering Occlusions

A 3D Representation of Obstacles in the Robot s Reachable Area Considering Occlusions A 3D Representation of Obstacles in the Robot s Reachable Area Considering Occlusions Angelika Fetzner, Christian Frese, Christian Frey Fraunhofer Institute of Optronics, System Technologies and Image

More information

Cursor Design Considerations For the Pointer-based Television

Cursor Design Considerations For the Pointer-based Television Hillcrest Labs Design Note Cursor Design Considerations For the Pointer-based Television Designing the cursor for a pointing-based television must consider factors that differ from the implementation of

More information

EE-565-Lab2. Dr. Ahmad Kamal Nasir

EE-565-Lab2. Dr. Ahmad Kamal Nasir EE-565-Lab2 Introduction to Simulation Environment Dr. Ahmad Kamal Nasir 29.01.2016 Dr. -Ing. Ahmad Kamal Nasir 1 Today s Objectives Introduction to Gazebo Building a robot model in Gazebo Populating robot

More information

A Qualitative Analysis of 3D Display Technology

A Qualitative Analysis of 3D Display Technology A Qualitative Analysis of 3D Display Technology Nicholas Blackhawk, Shane Nelson, and Mary Scaramuzza Computer Science St. Olaf College 1500 St. Olaf Ave Northfield, MN 55057 scaramum@stolaf.edu Abstract

More information

Table of Contents. Introduction 1. Software installation 2. Remote control and video transmission 3. Navigation 4. FAQ 5.

Table of Contents. Introduction 1. Software installation 2. Remote control and video transmission 3. Navigation 4. FAQ 5. Table of Contents Introduction 1. Software installation 2. Remote control and video transmission 3. Navigation 4. FAQ 5. Maintenance 1.1 1.2 1.3 1.4 1.5 1.6 2 Introduction Introduction Introduction The

More information

Autodesk Fusion 360: Render. Overview

Autodesk Fusion 360: Render. Overview Overview Rendering is the process of generating an image by combining geometry, camera, texture, lighting and shading (also called materials) information using a computer program. Before an image can be

More information

About this document. Introduction. Where does Life Forms fit? Prev Menu Next Back p. 2

About this document. Introduction. Where does Life Forms fit? Prev Menu Next Back p. 2 Prev Menu Next Back p. 2 About this document This document explains how to use Life Forms Studio with LightWave 5.5-6.5. It also contains short examples of how to use LightWave and Life Forms together.

More information

Transforming Objects and Components

Transforming Objects and Components 4 Transforming Objects and Components Arrow selection Lasso selection Paint selection Move Rotate Scale Universal Manipulator Soft Modification Show Manipulator Last tool used Figure 4.1 Maya s manipulation

More information

CS 563 Advanced Topics in Computer Graphics QSplat. by Matt Maziarz

CS 563 Advanced Topics in Computer Graphics QSplat. by Matt Maziarz CS 563 Advanced Topics in Computer Graphics QSplat by Matt Maziarz Outline Previous work in area Background Overview In-depth look File structure Performance Future Point Rendering To save on setup and

More information

Department of Physics & Astronomy Lab Manual Undergraduate Labs. A Guide to Logger Pro

Department of Physics & Astronomy Lab Manual Undergraduate Labs. A Guide to Logger Pro A Guide to Logger Pro Logger Pro is the main program used in our physics labs for data collection and analysis. You are encouraged to download Logger Pro to your personal laptop and bring it with you to

More information

Visual Perception for Robots

Visual Perception for Robots Visual Perception for Robots Sven Behnke Computer Science Institute VI Autonomous Intelligent Systems Our Cognitive Robots Complete systems for example scenarios Equipped with rich sensors Flying robot

More information

BSc Computing Year 3 Graphics Programming 3D Maze Room Assignment Two. by Richard M. Mann:

BSc Computing Year 3 Graphics Programming 3D Maze Room Assignment Two. by Richard M. Mann: BSc Computing Year 3 Graphics Programming 3D Maze Room Assignment Two by Richard M. Mann: 20032144 April 2003 Table of Contents 1 INTRODUCTION...4 2 ANALYSIS & DESIGN...5 2.1 ROOM DESIGN... 5 2.1.1 Dimensions...5

More information

Sung-Eui Yoon ( 윤성의 )

Sung-Eui Yoon ( 윤성의 ) CS380: Computer Graphics Ray Tracing Sung-Eui Yoon ( 윤성의 ) Course URL: http://sglab.kaist.ac.kr/~sungeui/cg/ Class Objectives Understand overall algorithm of recursive ray tracing Ray generations Intersection

More information

Dense Tracking and Mapping for Autonomous Quadrocopters. Jürgen Sturm

Dense Tracking and Mapping for Autonomous Quadrocopters. Jürgen Sturm Computer Vision Group Prof. Daniel Cremers Dense Tracking and Mapping for Autonomous Quadrocopters Jürgen Sturm Joint work with Frank Steinbrücker, Jakob Engel, Christian Kerl, Erik Bylow, and Daniel Cremers

More information

the NXT-G programming environment

the NXT-G programming environment 2 the NXT-G programming environment This chapter takes a close look at the NXT-G programming environment and presents a few simple programs. The NXT-G programming environment is fairly complex, with lots

More information

This is the opening view of blender.

This is the opening view of blender. This is the opening view of blender. Note that interacting with Blender is a little different from other programs that you may be used to. For example, left clicking won t select objects on the scene,

More information

Bumblebee2 Stereo Vision Camera

Bumblebee2 Stereo Vision Camera Bumblebee2 Stereo Vision Camera Description We use the Point Grey Bumblebee2 Stereo Vision Camera in this lab section. This stereo camera can capture 648 x 488 video at 48 FPS. 1) Microlenses 2) Status

More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information

Visual Physics Introductory Lab [Lab 0]

Visual Physics Introductory Lab [Lab 0] Your Introductory Lab will guide you through the steps necessary to utilize state-of-the-art technology to acquire and graph data of mechanics experiments. Throughout Visual Physics, you will be using

More information

Active Recognition and Manipulation of Simple Parts Exploiting 3D Information

Active Recognition and Manipulation of Simple Parts Exploiting 3D Information experiment ActReMa Active Recognition and Manipulation of Simple Parts Exploiting 3D Information Experiment Partners: Rheinische Friedrich-Wilhelms-Universität Bonn Metronom Automation GmbH Experiment

More information

1 Interface Fundamentals

1 Interface Fundamentals 1 Interface Fundamentals Windows The Media Composer interface is focused on three primary windows: the Composer, the Timeline and the Project. The Composer window contains the source and record monitors

More information

Basic features. Adding audio files and tracks

Basic features. Adding audio files and tracks Audio in Pictures to Exe Introduction In the past the conventional wisdom was that you needed a separate audio editing program to produce the soundtrack for an AV sequence. However I believe that PTE (Pictures

More information

Handheld Augmented Reality. Reto Lindegger

Handheld Augmented Reality. Reto Lindegger Handheld Augmented Reality Reto Lindegger lreto@ethz.ch 1 AUGMENTED REALITY 2 A Definition Three important characteristics: Combines real and virtual environment Interactive in real-time Registered in

More information

Bridging the Paper and Electronic Worlds

Bridging the Paper and Electronic Worlds Bridging the Paper and Electronic Worlds Johnson, Jellinek, Klotz, Card. Aaron Zinman MAS.961 What its about Paper exists Its useful and persistent Xerox is concerned with doc management Scanning is problematic

More information

v Overview SMS Tutorials Prerequisites Requirements Time Objectives

v Overview SMS Tutorials Prerequisites Requirements Time Objectives v. 12.2 SMS 12.2 Tutorial Overview Objectives This tutorial describes the major components of the SMS interface and gives a brief introduction to the different SMS modules. Ideally, this tutorial should

More information

CS4758: Rovio Augmented Vision Mapping Project

CS4758: Rovio Augmented Vision Mapping Project CS4758: Rovio Augmented Vision Mapping Project Sam Fladung, James Mwaura Abstract The goal of this project is to use the Rovio to create a 2D map of its environment using a camera and a fixed laser pointer

More information

HOUR 12. Adding a Chart

HOUR 12. Adding a Chart HOUR 12 Adding a Chart The highlights of this hour are as follows: Reasons for using a chart The chart elements The chart types How to create charts with the Chart Wizard How to work with charts How to

More information

MATLAB Based Interactive Music Player using XBOX Kinect

MATLAB Based Interactive Music Player using XBOX Kinect 1 MATLAB Based Interactive Music Player using XBOX Kinect EN.600.461 Final Project MATLAB Based Interactive Music Player using XBOX Kinect Gowtham G. Piyush R. Ashish K. (ggarime1, proutra1, akumar34)@jhu.edu

More information

User s guide. November LSE S.r.l. All rights reserved

User s guide. November LSE S.r.l. All rights reserved User s guide November 2015 2015 LSE S.r.l. All rights reserved WARNING In writing this manual every care has been taken to offer the most updated, correct and clear information possible; however unwanted

More information

Lecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013

Lecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013 Lecture 19: Depth Cameras Visual Computing Systems Continuing theme: computational photography Cameras capture light, then extensive processing produces the desired image Today: - Capturing scene depth

More information

(a) (b) (c) Fig. 1. Omnidirectional camera: (a) principle; (b) physical construction; (c) captured. of a local vision system is more challenging than

(a) (b) (c) Fig. 1. Omnidirectional camera: (a) principle; (b) physical construction; (c) captured. of a local vision system is more challenging than An Omnidirectional Vision System that finds and tracks color edges and blobs Felix v. Hundelshausen, Sven Behnke, and Raul Rojas Freie Universität Berlin, Institut für Informatik Takustr. 9, 14195 Berlin,

More information

3D object recognition used by team robotto

3D object recognition used by team robotto 3D object recognition used by team robotto Workshop Juliane Hoebel February 1, 2016 Faculty of Computer Science, Otto-von-Guericke University Magdeburg Content 1. Introduction 2. Depth sensor 3. 3D object

More information

CARE-O-BOT-RESEARCH: PROVIDING ROBUST ROBOTICS HARDWARE TO AN OPEN SOURCE COMMUNITY

CARE-O-BOT-RESEARCH: PROVIDING ROBUST ROBOTICS HARDWARE TO AN OPEN SOURCE COMMUNITY CARE-O-BOT-RESEARCH: PROVIDING ROBUST ROBOTICS HARDWARE TO AN OPEN SOURCE COMMUNITY Dipl.-Ing. Florian Weißhardt Fraunhofer Institute for Manufacturing Engineering and Automation IPA Outline Objective

More information

AUTODESK FUSION 360 Designing a RC Car Body

AUTODESK FUSION 360 Designing a RC Car Body AUTODESK FUSION 360 Designing a RC Car Body Abstract This project explores how to use the sculpting tools available in Autodesk Fusion 360 Ultimate to design the body of a RC car. John Helfen john.helfen@autodesk.com

More information

Illumination and Geometry Techniques. Karljohan Lundin Palmerius

Illumination and Geometry Techniques. Karljohan Lundin Palmerius Illumination and Geometry Techniques Karljohan Lundin Palmerius Objectives Complex geometries Translucency Huge areas Really nice graphics! Shadows Graceful degradation Acceleration Optimization Straightforward

More information

Blender Animation Editors

Blender Animation Editors Blender Animation Editors Animation Editors Posted on September 8, 2010 by mrsiefker Blender has several different editors for creating and fine tuning our animations. Each one is built around a specific

More information

WeDo Mars Rover. YOUR CHALLENGE: Working with a partner, collect rock and soil samples from the Martian crust using your Mars Rover Robot.

WeDo Mars Rover. YOUR CHALLENGE: Working with a partner, collect rock and soil samples from the Martian crust using your Mars Rover Robot. WeDo Mars Rover WHAT: The Lego WeDo is a robotics kit that contains a motor, sensors, and a variety of Lego parts that can construct robots and kinetic sculptures. Program your WeDo creation using the

More information

Course Review. Computer Animation and Visualisation. Taku Komura

Course Review. Computer Animation and Visualisation. Taku Komura Course Review Computer Animation and Visualisation Taku Komura Characters include Human models Virtual characters Animal models Representation of postures The body has a hierarchical structure Many types

More information

UI Elements. If you are not working in 2D mode, you need to change the texture type to Sprite (2D and UI)

UI Elements. If you are not working in 2D mode, you need to change the texture type to Sprite (2D and UI) UI Elements 1 2D Sprites If you are not working in 2D mode, you need to change the texture type to Sprite (2D and UI) Change Sprite Mode based on how many images are contained in your texture If you are

More information

SLAMWARE. RoboStudio. User Manual. Shanghai Slamtec.Co.,Ltd rev.1.1

SLAMWARE. RoboStudio. User Manual. Shanghai Slamtec.Co.,Ltd rev.1.1 www.slamtec.com 2017-11-06 rev.1.1 SLAMWARE RoboStudio User Manual Shanghai Slamtec.Co.,Ltd Contents CONTENTS... 1 INTRODUCTION... 3 USER GUIDE... 4 OFFLINE/ONLINE MODE... 4 CONNECT/DISCONNECT ROBOT...

More information

2 SELECTING AND ALIGNING

2 SELECTING AND ALIGNING 2 SELECTING AND ALIGNING Lesson overview In this lesson, you ll learn how to do the following: Differentiate between the various selection tools and employ different selection techniques. Recognize Smart

More information

Real time 3D reconstruction of non-structured domestic environment for obstacle avoidance using multiple RGB-D cameras

Real time 3D reconstruction of non-structured domestic environment for obstacle avoidance using multiple RGB-D cameras Maig 2015 Màster universitari en Automàtica i Robòtica Jordi Magdaleno Maltas Treball de Fi de Màster Màster universitari en Automàtica i Robòtica Real time 3D reconstruction of non-structured domestic

More information

S7316: Real-Time Robotics Control and Simulation for Deformable Terrain Applications Using the GPU

S7316: Real-Time Robotics Control and Simulation for Deformable Terrain Applications Using the GPU S7316: Real-Time Robotics Control and Simulation for Deformable Terrain Applications Using the GPU Daniel Melanz Copyright 2017 Energid Technology Overview 1. Who are we? 2. What do we do? 3. How do we

More information

5 Subdivision Surfaces

5 Subdivision Surfaces 5 Subdivision Surfaces In Maya, subdivision surfaces possess characteristics of both polygon and NURBS surface types. This hybrid surface type offers some features not offered by the other surface types.

More information

Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images

Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images Abstract This paper presents a new method to generate and present arbitrarily

More information

Guide to Parallel Operating Systems with Windows 7 and Linux

Guide to Parallel Operating Systems with Windows 7 and Linux Guide to Parallel Operating Systems with Windows 7 and Linux Chapter 3 Using the Graphical User Interface Objectives Use the Start menu and Applications menu Tailor the desktop Access data on your computer

More information

COMBINING TASK AND MOTION PLANNING FOR COMPLEX MANIPULATION

COMBINING TASK AND MOTION PLANNING FOR COMPLEX MANIPULATION COMBINING TASK AND MOTION PLANNING FOR COMPLEX MANIPULATION 16-662: ROBOT AUTONOMY PROJECT Team: Gauri Gandhi Keerthana Manivannan Lekha Mohan Rohit Dashrathi Richa Varma ABSTRACT Robots performing household

More information

Using Perspective Rays and Symmetry to Model Duality

Using Perspective Rays and Symmetry to Model Duality Using Perspective Rays and Symmetry to Model Duality Alex Wang Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2016-13 http://www.eecs.berkeley.edu/pubs/techrpts/2016/eecs-2016-13.html

More information

Mobile Robot Navigation Using Omnidirectional Vision

Mobile Robot Navigation Using Omnidirectional Vision Mobile Robot Navigation Using Omnidirectional Vision László Mornailla, Tamás Gábor Pekár, Csaba Gergő Solymosi, Zoltán Vámossy John von Neumann Faculty of Informatics, Budapest Tech Bécsi út 96/B, H-1034

More information

Computer Graphics 1. Chapter 9 (July 1st, 2010, 2-4pm): Interaction in 3D. LMU München Medieninformatik Andreas Butz Computergraphik 1 SS2010

Computer Graphics 1. Chapter 9 (July 1st, 2010, 2-4pm): Interaction in 3D. LMU München Medieninformatik Andreas Butz Computergraphik 1 SS2010 Computer Graphics 1 Chapter 9 (July 1st, 2010, 2-4pm): Interaction in 3D 1 The 3D rendering pipeline (our version for this class) 3D models in model coordinates 3D models in world coordinates 2D Polygons

More information

CS4495/6495 Introduction to Computer Vision

CS4495/6495 Introduction to Computer Vision CS4495/6495 Introduction to Computer Vision 9C-L1 3D perception Some slides by Kelsey Hawkins Motivation Why do animals, people & robots need vision? To detect and recognize objects/landmarks Is that a

More information

Fast Local Planner for Autonomous Helicopter

Fast Local Planner for Autonomous Helicopter Fast Local Planner for Autonomous Helicopter Alexander Washburn talexan@seas.upenn.edu Faculty advisor: Maxim Likhachev April 22, 2008 Abstract: One challenge of autonomous flight is creating a system

More information

Interactive 3D Geometrical Modelers for Virtual Reality and Design. Mark Green*, Jiandong Liang**, and Chris Shaw*

Interactive 3D Geometrical Modelers for Virtual Reality and Design. Mark Green*, Jiandong Liang**, and Chris Shaw* Interactive 3D Geometrical Modelers for Virtual Reality and Design Mark Green*, Jiandong Liang**, and Chris Shaw* *Department of Computing Science, University of Alberta, Edmonton, Canada **Alberta Research

More information

Homework #1. Displays, Image Processing, Affine Transformations, Hierarchical Modeling

Homework #1. Displays, Image Processing, Affine Transformations, Hierarchical Modeling Computer Graphics Instructor: Brian Curless CSE 457 Spring 215 Homework #1 Displays, Image Processing, Affine Transformations, Hierarchical Modeling Assigned: Thursday, April 9 th Due: Thursday, April

More information

Construction of SCARA robot simulation platform based on ROS

Construction of SCARA robot simulation platform based on ROS Construction of SCARA robot simulation platform based on ROS Yingpeng Yang a, Zhaobo Zhuang b and Ruiqi Xu c School of Shandong University of Science and Technology, Shandong 266590, China; ayangyingp1992@163.com,

More information

The Most User-Friendly 3D scanner

The Most User-Friendly 3D scanner The Most User-Friendly 3D scanner The Solutionix C500 is optimized for scanning small- to medium-sized objects. With dual 5.0MP cameras, the C500 provides excellent data quality at a high resolution. In

More information

An Epic Laser Battle

An Epic Laser Battle FLOWSTONE.qxd 2/14/2011 3:52 PM Page 20 by the staff of Robot An Epic Laser Battle BETWEEN ROBOTS Behind the Scenes! How we created vision-based programming using FlowStone Last month we introduced FlowStone,

More information

Prototype D10.2: Project Web-site

Prototype D10.2: Project Web-site EC Project 257859 Risk and Opportunity management of huge-scale BUSiness community cooperation Prototype D10.2: Project Web-site 29 Dec 2010 Version: 1.0 Thomas Gottron gottron@uni-koblenz.de Institute

More information

Graph-based Guidance in Huge Point Clouds

Graph-based Guidance in Huge Point Clouds Graph-based Guidance in Huge Point Clouds Claus SCHEIBLAUER / Michael WIMMER Institute of Computer Graphics and Algorithms, Vienna University of Technology, Austria Abstract: In recent years the use of

More information

16-662: Robot Autonomy Project Bi-Manual Manipulation Final Report

16-662: Robot Autonomy Project Bi-Manual Manipulation Final Report 16-662: Robot Autonomy Project Bi-Manual Manipulation Final Report Oliver Krengel, Abdul Zafar, Chien Chih Ho, Rohit Murthy, Pengsheng Guo 1 Introduction 1.1 Goal We aim to use the PR2 robot in the search-based-planning

More information

COS 116 The Computational Universe Laboratory 10: Computer Graphics

COS 116 The Computational Universe Laboratory 10: Computer Graphics COS 116 The Computational Universe Laboratory 10: Computer Graphics As mentioned in lecture, computer graphics has four major parts: imaging, rendering, modeling, and animation. In this lab you will learn

More information

solidthinking Environment...1 Modeling Views...5 Console...13 Selecting Objects...15 Working Modes...19 World Browser...25 Construction Tree...

solidthinking Environment...1 Modeling Views...5 Console...13 Selecting Objects...15 Working Modes...19 World Browser...25 Construction Tree... Copyright 1993-2009 solidthinking, Inc. All rights reserved. solidthinking and renderthinking are trademarks of solidthinking, Inc. All other trademarks or service marks are the property of their respective

More information

Computer Basics. Page 1 of 10. We optimize South Carolina's investment in library and information services.

Computer Basics. Page 1 of 10. We optimize South Carolina's investment in library and information services. Computer Basics Page 1 of 10 We optimize South Carolina's investment in library and information services. Rev. Oct 2010 PCs & their parts What is a PC? PC stands for personal computer. A PC is meant to

More information

W4. Perception & Situation Awareness & Decision making

W4. Perception & Situation Awareness & Decision making W4. Perception & Situation Awareness & Decision making Robot Perception for Dynamic environments: Outline & DP-Grids concept Dynamic Probabilistic Grids Bayesian Occupancy Filter concept Dynamic Probabilistic

More information

Trash in the Dock. May 21, 2017, Beginners SIG The Dock (Part 3 of 3)

Trash in the Dock. May 21, 2017, Beginners SIG The Dock (Part 3 of 3) Note: This discussion is based on MacOS, 10.12.4 (Sierra). Some illustrations may differ when using other versions of macos or OS X. Credit: http://tidbits.com/e/17088 ( macos Hidden Treasures: Dominate

More information

Caustics - Mental Ray

Caustics - Mental Ray Caustics - Mental Ray (Working with real caustic generation) In this tutorial we are going to go over some advanced lighting techniques for creating realistic caustic effects. Caustics are the bent reflections

More information

Equipment use workshop How to use the sweet-pepper harvester

Equipment use workshop How to use the sweet-pepper harvester Equipment use workshop How to use the sweet-pepper harvester Jochen Hemming (Wageningen UR) September-2014 Working environment robot Environmental conditions Temperature: 13 to 35 C Relative Humidity:

More information