The Essential Components of Human-Friendly Robot Systems

Y. Matsumoto, J. Heinzmann and A. Zelinsky
Research School of Information Sciences and Engineering
Australian National University, Canberra, ACT 0200, Australia
URL:

Abstract

To develop human-friendly robots we require two key components: visual interfaces and safe mechanisms. Visual interfaces facilitate natural and easy human-robot interaction; facial gestures can be a natural way to control a robot. In this paper, we report on a vision-based interface that tracks a user's facial features and gaze point in real time. Human-friendly robots must also have high-integrity safety systems that ensure that people are never harmed. To guarantee human safety we require manipulator mechanisms in which all actuators are force controlled in a manner that prevents dangerous impacts with people and the environment. In this paper we report on a control scheme for the Barrett-MIT whole arm manipulator (WAM) which allows people to safely interact with the robot.

1 Introduction

If robotics technology is to be introduced into the everyday human world, it must not only operate efficiently while executing complex tasks such as house cleaning or putting out the garbage; it must also be safe and easy for people to use. What constitutes a human-friendly robot? Firstly, human-friendly robots must be easy to use and have natural communication interfaces. People naturally express themselves through language, facial gestures and expressions. Speech recognition in controlled situations, using a limited vocabulary with minimal background noise interference, is now possible and could be included in human-robot interfaces. However, vision-based human-computer interfaces have only recently begun to attract considerable attention [1, 2, 3].
With a visual interface a robot could recognise facial gestures such as "yes" or "no", as well as determine the user's gaze point, i.e. where the person is looking. (Graduate School of Information Science, Nara Institute of Science and Technology, Takayamacho, Ikoma-city, Nara, Japan.) The ability to estimate a person's gaze point is most important for a human-friendly robot. For example, a robot assisting the disabled may need to pick up items that attract the user's gaze. Our goal is to build "smart" visual interfaces for use in human-robot applications. In this paper we describe our most recent results in this area.

The second requirement for human-friendly robots is that they must possess a safety system of high integrity that ensures the robot's actions never result in physical harm to a human. One way to solve this problem is to build robots that are small, light-weight and slow moving. However, this results in systems that cannot lift or carry objects of any significance. The only alternative is to build systems that are strong and fast, but which are human-friendly. Current commercial robot manipulator technology uses point-to-point control and should be considered human-unfriendly. Point-to-point control is quite dangerous to use in dynamic environments. For example, if an object unexpectedly blocks a planned point-to-point path, either the object is smashed out of the robot's path or the robot sustains considerable damage. Clearly, a compliant force-based controller is needed. However, commercial technology for force-based control of robots is not yet readily available. The current practice is to add force sensors, usually at the tool plate, to existing robot manipulators. While this allows force control of the robot's end point, other parts of the manipulator could still collide with unexpected objects. To ensure safety a Whole Arm Manipulation (WAM) approach is needed, in which all of the robot's joints are force controlled.
Another important aspect is that robot systems must guard against software failures, and prevent a malfunctioning robot from continuing to operate. In this paper we describe our recent results in building a safe control architecture for a robot manipulator.
2 Visual Interfaces

2.1 Face and Gaze Detection

Several types of commercial products exist for detecting head position and orientation using magnetic sensors and link mechanisms. There are also several companies supplying products that perform eye-gaze tracking. These products are generally highly accurate and reliable. However, all of them require either expensive hardware and/or artificial environments (helmets, infrared lighting, markings on the face etc.). The restricted motion and discomfort to the user caused by such equipment makes it difficult to measure the natural and uninhibited behaviour of people. To solve this problem, many research results related to the visual detection of head pose have been reported [1, 2, 3, 4, 5, 6, 7]. Recent advances in computer hardware have allowed researchers to develop real-time face tracking systems. However, all of the previously reported systems are based on monocular vision. Recovering 3D pose from a monocular image stream is in general regarded as a difficult problem; high accuracy as well as robustness are particularly hard to achieve, and most reported systems do not compute the full 6-DOF 3D posture of the head. Researchers have developed monocular systems that detect both head pose and gaze point simultaneously [8, 9]; however, these systems do not accurately determine the 3D vector of the gaze direction.
In order to construct a system which observes a person without causing any discomfort, the system should satisfy the following requirements: non-contact, passive, real-time, robust to occlusions and lighting changes, compact, accurate, and capable of detecting head posture and gaze direction simultaneously. Our system simultaneously satisfies all the listed requirements by utilizing the following techniques: stereo vision using field multiplexing, image processing using normalized correlation, and 3D model fitting using virtual springs.

3 Real-time Vision Hardware

Figure 1 illustrates the hardware setup of our real-time stereo face tracking system. It has an NTSC camera pair (SONY EVI-370DG x2) to capture the person's face. The output video signals from the cameras are multiplexed into one video signal using the "field multiplexing" technique [10]. The multiplexed video stream is then fed into a vision processing board (Hitachi IP5000), where the position and the orientation of the face are determined. The face tracking results are visualized on an SGI O2 graphics workstation.

Figure 1: System configuration of the human-machine interface (stereo camera, MUX, IP5000 vision processor in a Pentium II 450 MHz / 64 MB PC, monitor, and SGI O2).

3.1 Hitachi IP5000 Image Processor

The Hitachi IP5000, a half-sized PCI image processing board, is used in this research. The card is equipped with 40 frame memories. It provides in hardware a wide variety of fast image processing functions such as binarization, convolution, filtering, labeling, histogram calculation, color extraction and normalized correlation. These operations run at 73.5 MHz, which means the card can apply a single basic function, such as binarization, to a single image in 3.6 ms.

3.2 Field Multiplexing Device

Field multiplexing is a method for generating an analog multiplexed video stream from two video streams. A diagram of the device is shown in Figure 2.
The device feeds two synchronized video streams into a video switching IC. The video switcher selects one signal and uses it as the odd field of the video output; the other signal becomes the even field of the output. Since the switching frequency is only 60 Hz, the multiplexer can be implemented easily and cheaply using only consumer electronic parts. A photo of the device is also shown in Figure 2; its size is less than 5 cm square. The advantage of multiplexing the video signals in the analog phase is that the approach can be applied to any vision system: single video stream processing is transformed into stereo vision processing. Since the multiplexed image is stored in a single video frame memory, stereo image processing can be performed within that memory. This means there is no overhead cost for transferring images, which is inevitable in stereo vision systems with two image processing boards. Thus a system with a field multiplexing device can have a higher performance than a system with two
boards. A minor weak point of field multiplexing is that the image looks strange to human eyes if the signal is displayed directly on a TV monitor, because the two images are superimposed every two lines. However, this does not make image processing any harder, since a normal image can easily be obtained by subsampling the multiplexed image in the vertical direction.

Figure 2: Field Multiplexing Device (two NTSC camera inputs pass through buffers, a sync separator and a video switch to produce the multiplexed NTSC output).

4 Stereo Face Tracking

4.1 3D Facial Model

The 3D facial model used in our stereo face tracking is composed of three components: template images of the facial features, 3D coordinates of the facial features, and an image of the entire face. The facial features are defined as the corners of the eyes and the mouth, so there are six feature images and coordinates in a facial model; an example is shown in Figure 3. The facial model also stores an image of the whole face in low resolution. This image is used to search for the face at the system initialisation stage and in cases where feature tracking fails. The facial model can be acquired either automatically or manually. In the automatic acquisition mode, the eyes and mouth are detected by first finding skin-coloured regions in the image and then binarizing the intensity information contained in the skin-coloured facial region. Small image patches at both ends of the extracted eyes and mouth are memorized as feature templates, and the 3D coordinates of the features are calculated based on stereo matching. In the manual acquisition mode, the image patches of the features are selected by simply clicking with a mouse on positions of interest in the image; stereo matching is then performed to calculate the 3D coordinates.

Figure 3: Upper: extracted tracking features from stereo images; lower: 3D facial model (six feature templates with 3D coordinates such as (-49, 15, -9), plus a whole-face template).

4.2 Stereo Tracking Algorithm

The flowchart describing the stereo tracking algorithm is shown in Figure 4. Before face tracking starts, the error recovery procedure is executed to determine the approximate position of the face in the live video stream using the whole-face image. Feature tracking and stereo matching for each feature are carried out to determine the 3D position of each feature. The 3D facial model is fitted to the 3D measurements, and the 3D position and orientation of the face are estimated in terms of six parameters. Then the 3D coordinates of each feature are adjusted to maintain the consistency of the rigid-body facial model. Finally, the 3D feature coordinates are projected back onto the 2D image plane in order to update the search area for feature tracking by the vision processor in the next frame. At the end of each tracking cycle, the overall reliability of the face tracking is determined from the correlation values of feature tracking and stereo matching. If the reliability is higher than a preset threshold, the system returns to the beginning of the tracking process. Otherwise the system decides
it has lost the face and jumps back to the error recovery phase.

Figure 4: Tracking algorithm (the face tracking loop of 2D feature tracking, 3D stereo matching, 3D model fitting and 2D projection, with an error recovery path of 2D face searching, 3D stereo matching and 2D projection).

3D Feature Tracking

In the 3D feature tracking stage, it is assumed that each feature undergoes only a small displacement between the current frame and the previous one. The 2D positions of the features in the previous frame are used to determine the search areas in the current frame, and the feature images stored with the 3D facial model are used as templates. The image from the right camera is searched for the features first; the 2D features found in the right image are then used as templates to search the image from the left camera. By stereo matching, the 3D coordinates of each feature are acquired. The processing time of the whole tracking process (i.e. feature tracking plus stereo matching for six features) is about 10 ms on the IP5000.

3D Model Fitting

Figure 5 illustrates the coordinate system used to represent the position and orientation of the face. The parameters (φ, θ, ϕ) represent the orientation of the face, and (x, y, z) represents the position of the face centre relative to the origin of the camera axis.

Figure 5: Coordinate system for face tracking (translation (x, y, z) and rotation (φ, θ, ϕ) relative to the camera origin O).

The diagrams in Figure 6 describe the model fitting scheme used in our system. In the actual implementation six features are used for tracking; however, only three points are illustrated in the diagrams for simplicity.
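The stereo matching step above recovers each feature's 3D coordinates from its positions in the two images. For a rectified pinhole stereo pair this reduces to the standard disparity relation Z = f·b/d. The following sketch illustrates that computation; the focal length and baseline are invented for the example, not our system's calibration:

```python
def triangulate(x_left, x_right, y, focal_px, baseline_mm):
    """Back-project a matched feature from a rectified stereo pair.

    disparity d = x_left - x_right (pixels); depth Z = f * b / d.
    Returns (X, Y, Z) in the left-camera frame, in millimetres,
    with pixel coordinates measured from the principal point.
    """
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: feature at infinity or mismatched")
    z = focal_px * baseline_mm / d
    x = x_left * z / focal_px
    y3 = y * z / focal_px
    return (x, y3, z)
```

For example, a feature seen at x = 120 px in the left image and x = 100 px in the right, with an 800 px focal length and 100 mm baseline, has disparity 20 px and lies at a depth of 4000 mm.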
The basic idea of the model fitting is to iteratively move the model closer to the measurements while considering the reliability of the feature tracking. As mentioned before, the face is assumed to undergo only a small motion between frames. This means there can be only small displacements in position and orientation, described as (Δx, Δy, Δz, Δφ, Δθ, Δϕ) in Figure 6(1). The position and orientation determined in the previous frame (at time t) are used to rotate and translate the data set of measurements from the vision system into the same coordinate space as the model, as shown in Figure 6(2). After the rotation and translation, the measurements retain a small disparity from the model due to the motion that occurred during the interval Δt, so fine fitting of the model is performed next. To realize a robust fit, it is essential to take the reliability of the individual measurements into account. The least-squares method is usually adopted for such purposes; in our system, a similar fitting approach based on virtual springs is used. The result of 3D tracking yields two correlation values (for the left and right images), each between 0 and 1, for each feature. If a template and a matching region have exactly the same pattern, the resulting correlation value is 1; a value of 0 results when all pixels in the two correlated images are maximally different. The product of the two correlation values for each feature can be regarded as a reliability value. The reliability values are used as the stiffness parameters of springs that link each feature in the model to the corresponding measurement. The spring-based model fitting is shown in Figure 6(3): the model is iteratively rotated and translated in order to reduce the elastic energy of the springs. Using the tracking reliabilities as spring constants makes the result of the model fitting insensitive to partial matching failures, and ensures robust face tracking.
The processing time of the iterative model fitting is less than 2 ms on a Pentium II 450 MHz processor.
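The effect of the reliability-weighted springs can be illustrated with a simplified version of the fit. The sketch below restricts the pose update to translation, where minimizing the spring energy has a closed form (the actual system iterates over rotation and translation together); all numbers are illustrative:

```python
import numpy as np

def spring_fit_translation(model_pts, measured_pts, reliabilities):
    """One closed-form step of the virtual-spring fit, restricted to
    translation for clarity.

    Each feature i is linked to its measurement by a spring whose
    stiffness k_i is the tracking-reliability product. Minimizing the
    elastic energy  E(t) = sum_i k_i * ||m_i + t - d_i||^2  over the
    translation t gives the reliability-weighted mean displacement.
    """
    k = np.asarray(reliabilities, dtype=float)
    m = np.asarray(model_pts, dtype=float)
    d = np.asarray(measured_pts, dtype=float)
    return (k[:, None] * (d - m)).sum(axis=0) / k.sum()
```

With two features, one tracked reliably (k = 1.0, displaced by 0.1) and one a likely mismatch (k = 0.1, displaced by 0.5), the fitted translation stays close to the reliable measurement, which is exactly the insensitivity to partial matching failures described above.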
Figure 6: 3D model fitting algorithm. (1) Displacement of the object between time t and time t + Δt; (2) the measurements rotated and translated into the model's coordinate space, leaving a residual displacement due to (Δx, Δy, Δz) and (Δφ, Δθ, Δϕ); (3) 3D model fitting based on virtual springs, whose stiffnesses k_n are set to the correlation values of the measurements.

4.3 Error Recovery

The tracking method described above uses only a small search area in the image, which enables real-time processing and continuously stable tracking. However, once the system fails to track the face, it is hard for it to recover using only local template matching, and a complementary method for finding the face in the image is necessary as an error recovery function. This process is also used at the beginning of tracking, when the position of the face is unknown. The whole-face image shown in Figure 3 is used in this process. In order to reduce the processing time, the template is stored in low resolution, and the live video stream is also reduced in resolution. The template is first searched for in the right image, and the matched image is then searched for in the left image. As a result, the rough 3D position of the face is determined, and this is used as the initial state for face tracking. This searching process takes about 100 ms.

5 Implementation Results

5.1 Face Tracking

Some snapshots of the results of the real-time face tracking system are shown in Figure 7. Images (1) and (2) in Figure 7 show results when the face rotates, while (3) and (4) show results when the face moves closer to and further from the camera. The whole tracking process takes about 30 ms, which is within the NTSC video frame rate. The accuracy of the tracking is approximately 1 mm in translation and 1 degree in rotation. The snapshots in Figure 8 show the results of tracking when there is some deformation of the facial features and partial occlusion of the face by a hand. The results indicate that our tracking system works quite robustly in such situations, owing to our model fitting method. By utilizing the normalized correlation function on the IP5000, the tracking system also tolerates fluctuations in lighting.

Figure 7: Face tracking with various head poses.
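The normalized correlation that underlies this lighting tolerance can be sketched in software. The minimal NumPy version below is an illustration only, not the IP5000's hardware implementation, and the synthetic image and template are invented for the example:

```python
import numpy as np

def normalized_correlation(image, template):
    """Zero-mean normalized cross-correlation (NCC) of a template
    against every position in a grayscale image.

    Returns a score map in [-1, 1]; 1 means a perfect match. Because
    both windows are mean-subtracted and variance-normalized, the score
    is invariant to affine changes in brightness.
    """
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            scores[y, x] = (w * t).sum() / denom if denom > 0 else 0.0
    return scores

# Synthetic 32x32 image with a bright 4x4 blob; the template is the
# 6x6 neighbourhood around the blob.
img = np.zeros((32, 32))
img[10:14, 20:24] = 1.0
tmpl = img[9:15, 19:25].copy()
```

Searching the image with the template peaks with a score of 1.0 exactly at the blob's neighbourhood, regardless of any global brightness offset or gain applied to the image.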
Figure 8: Face tracking with feature deformation and occlusion.

5.2 Visualization

The results of the tracking are visualized using an SGI O2 graphics workstation. Figure 9 illustrates examples of the tracking results and the corresponding visualization. The 3D model used in the visualization consists of a rigid face surface and two eyeballs. The face has six DOF for position and orientation, and each eyeball has two DOF. The positions of the irises are detected using the circular Hough transform and are used to move the eyes of the mannequin head. The visualization is performed online during tracking, so the mannequin head can mimic the person's head and eye motions in real time.

Figure 9: Visualization of tracking results.

5.3 Gaze Detection

By merging the information on the position and orientation of the face with the positions of the pupils, the 3D gaze vector can be computed. Figure 10 shows snapshots from a 3D gaze detection experiment. Detecting the positions of the irises takes about 2 ms, and the full face tracking and 3D gaze detection system runs at 20 Hz. The goal of this research is to develop a system that can be used as a visual interface between a human and a robot. However, the system has other obvious uses: it could be applied to measuring human performance in psychological experiments, to ergonomic design, and to products for the disabled and the amusement industry.

Figure 10: Gaze detection.

6 Human-friendly robots

The goal of this research is to construct capable robots (as opposed to small, intrinsically safe toys) that are able to interact physically with humans in a safe way. Consider the motor vehicle: despite the potential dangers a car poses to its driver and to people in its vicinity, most people feel comfortable and safe while driving cars or walking on the side of streets.
The reason for this is that a human driver is able to control the overall behaviour of the car using the steering wheel, brake and throttle. Various electronic and mechanical systems provide specialised functions, such as clutch and gear handling in automatics and anti-lock braking systems. It is of paramount importance that the driver is always in full control of the overall behaviour of the technical system, and that he or she can understand and predict how the overall system behaves and how the
control inputs relate to this behaviour. This mechanism fails when a driver makes an emergency stop without ABS and the car no longer responds to steering actions: it is counterintuitive to release the brakes in an emergency situation to regain steering control. A similar approach to making technical systems understandable is appropriate for the research area of human-friendly robots. It is of major importance that the user is in control of the overall behaviour and is able to predict the system's response.

6.1 Basic Behaviour

The basic behaviour of the robot is to compensate for gravity (Zero-G behaviour). In this mode the robot is completely passive and appears to be weightless. A human can easily move the robot around by pushing the links. The motion of the robot is slowed down only by the (low) friction in the joints, and when pushed the robot keeps moving until it collides with an obstacle or reaches its joint limits. This behaviour is achieved by feed-forward control only. We consider this mode to be safe in terms of human-friendly robotics. Since the robot is completely passive, all actions of the robot are directly initiated by the operator, and the behaviour of the robot can be easily predicted even by non-experts. The robot can only cause harm when abused by the operator. Thus, a passive robot in Zero-G mode is safe in terms of human-robot interaction. However, autonomous motion initiated by the robot has to be considered potentially dangerous, and to ensure safe operation, restrictions on the robot's actions have to be guaranteed at all times. The underlying theory for deriving the appropriate joint torques is well understood and can be found in the literature; see [11].

6.2 Safe Motion Control

By specifying additional torques in parallel with the torques required for gravity compensation, the robot can achieve self-controlled motions. This is required for all tasks in which the robot moves by itself.
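For a concrete picture of the Zero-G feed-forward law, consider a planar two-link arm (the WAM's 7-DOF case is analogous but larger). The torques below exactly cancel the gravity loading; the masses and link lengths are illustrative, not the WAM's parameters:

```python
import math

def gravity_compensation(q1, q2, m1=1.0, m2=1.0, l1=0.5, lc1=0.25, lc2=0.25, g=9.81):
    """Feed-forward joint torques cancelling gravity for a planar
    two-link arm (angles measured from the horizontal).

    m1, m2 : link masses; l1 : length of link 1;
    lc1, lc2 : distances from each joint to its link's centre of mass.
    With these torques applied the arm is effectively weightless and
    can be pushed around freely by a person (Zero-G behaviour).
    """
    # Torque at the elbow supports only the second link's weight.
    tau2 = m2 * lc2 * g * math.cos(q1 + q2)
    # Torque at the shoulder supports both links.
    tau1 = (m1 * lc1 + m2 * l1) * g * math.cos(q1) + tau2
    return tau1, tau2
```

With the arm stretched out horizontally (q1 = q2 = 0) the compensation torques are largest; with the arm pointing straight up they vanish, as expected for a feed-forward gravity model.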
The threat posed by self-controlled motion is a collision of the robot with a human. With today's technology it is not feasible to reliably detect the presence and position of humans in the robot's workspace. Collisions of the robot with the operator cannot be ruled out, and therefore the threat posed by such a collision has to be restricted to an acceptable level.

6.3 Impact force

The impact force provides a good measure of how dangerous a potential collision with a person could be. A collision between the mechanical system and an obstacle can occur at any point p on the surface of the mechanical system. The impact force F̂ at a particular point p on the surface of a serial manipulator colliding with a stationary obstacle can be found using equations derived by Walker et al. [12]:

    F̂ = -(1 + e) (vᵀn) / (nᵀ J_p(θ) I⁻¹(θ) J_pᵀ(θ) n)   (1)

Here, v ∈ ℝ³ is the velocity of the mechanical system at the point of impact p, θ ∈ ℝⁿ are the joint angles, J_p(θ) is the manipulator Jacobian for the point p where the contact occurs, I(θ) is the n x n inertia matrix, and n ∈ ℝ³ is the unit contact normal vector. In the worst case, v is aligned with n but pointing in the opposite direction (v = -s_v n, with s_v = ‖v‖) and the impact is elastic (e = 1). In this worst-case scenario, (1) simplifies to:

    F̂_max = -2 vᵀ(-(1/s_v) v) / ((-(1/s_v) vᵀ) J_p(θ) I⁻¹(θ) J_pᵀ(θ) (-(1/s_v) v))   (2)
          = 2 s_v (vᵀv) / (vᵀ J_p(θ) I⁻¹(θ) J_pᵀ(θ) v)   (3)
          = 2 s_v³ / (vᵀ J_p(θ) I⁻¹(θ) J_pᵀ(θ) v)   (4)

With v = J_p(θ)θ̇ and s_v = ‖v‖ = ‖J_p(θ)θ̇‖, F̂_max can be expressed as a function of the state (θ, θ̇) of the mechanical system:

    F̂_max = 2 ‖J_p(θ)θ̇‖³ / (θ̇ᵀ J_pᵀ(θ) J_p(θ) I⁻¹(θ) J_pᵀ(θ) J_p(θ) θ̇)   (5)

F̂_max is undefined for v = J_p(θ)θ̇ = 0. However, for a stationary point the impact energy with a stationary obstacle equals zero, and F̂_max is defined to be zero accordingly.
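Equation (5) is straightforward to evaluate numerically for a given state. The sketch below uses NumPy with an invented one-joint example (Jacobian and inertia chosen for illustration); for a single point mass m moving at speed v the result reduces to the elastic impulse 2mv, which is a useful sanity check:

```python
import numpy as np

def max_impact_force(J_p, I, dq):
    """Worst-case impact measure of eq. (5): elastic impact (e = 1)
    with the contact normal opposing the velocity at point p.

    J_p : 3 x n Jacobian at the contact point
    I   : n x n joint-space inertia matrix
    dq  : joint velocity vector (length n)
    """
    v = J_p @ dq
    speed = np.linalg.norm(v)
    if speed == 0.0:
        return 0.0  # stationary point: zero impact energy
    denom = v @ J_p @ np.linalg.inv(I) @ J_p.T @ v
    return 2.0 * speed ** 3 / denom

# One prismatic joint carrying a 2 kg effective mass at 1 m/s:
# the worst-case elastic impulse is 2 * m * v = 4.
```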
We define Impactergy to be the maximum impact force that a moving mechanical system can cause with a static environment. For this we have to consider all points p ∈ P on the surface of the robot, which makes F̂_max : P x ℝⁿ x ℝⁿ → ℝ a function of the point p and the dynamic state (θ, θ̇) of the system. We can now define the Impactergy Φ of the system as

    Φ = sup_{p∈P} Φ_p,  where Φ_p = F̂_max(p, θ, θ̇)   (6)

The Impactergy is a well-defined scalar value that provides an upper limit for any impact of the system
with a static object. Since it assumes an elastic impact and aligned surface normals, the limit is rather conservative. We can now define a limit Φ_max for the impact force which can be considered safe in the event of a collision of the robot with a person. The maximum Impactergy we allow for the robot defines the safe region in state space where the following inequality holds:

    Φ_p ≤ Φ_max  ∀p ∈ P   (7)

6.5 Controlling the impact

The goal of the novel Impact Control scheme is to ensure that the robot cannot leave the safe region in state space. However, external forces may still force the robot outside the safe region, e.g. when it is pushed by the operator. The Impact Controller acts as a saturating filter between the motion control algorithm and the robot (refer to Figure 11); it provides a safety envelope which encapsulates the robot.

Figure 11: Safety software architecture for the control of the WAM robot (a robot programming layer with dynamics/space conversion feeds a safety envelope that limits the additional motor torques; a velocity guard and a safety heartbeat can disable the system; Zero-G gravity compensation and torque ripple compensation act underneath, with external forces from humans and obstacles acting on the robot).

The Impact Controller passes the torque vector generated by the motion control algorithm unchanged to the robot as long as it complies with the safety constraints set in the Impact Controller. If the desired torque vector does not satisfy the safety constraints, a clipping function has to be applied to it; the resulting safe torque vector guarantees that the robot cannot leave the safe region. A suitable state-dependent control constraint is defined by the following inequality:

    (d/dt) Φ_p ≤ (1/t_c)(Φ_max - Φ_p)  ∀p ∈ P   (8)

The time constant t_c defines how fast the robot may gain Impactergy. It can be set in a way that makes the motion of the robot predictable for the operator. The clipping function has to project the desired torque vector from the motion control algorithm into the safe region in the torque vector space.
One solution is to find the safe torque vector closest to the desired torque vector. Note that it is not always sufficient to scale down the desired torque vector, since the zero-torque vector is not always in the safe region: a moving robot in Zero-G can increase its Impactergy without any further torques being applied to the system. In this case the Impact Controller will slow the robot down to enforce the safe-region constraints. If the robot is pushed out of the safe region by external forces, the right side of Equation 8 becomes negative and the Impact Controller enforces a reduction of the Impactergy according to the time constant t_c.

6.6 Implementing Impact Control

To implement the proposed scheme, which guarantees limited Impactergy during autonomous motions of the robot, the constraints have to be transferred into motor torque space. Considering that Φ_p(t) = Φ_p(θ(t), θ̇(t)), Equation 8 can be expanded to the following:

    (∂Φ_p/∂θ) θ̇ + (∂Φ_p/∂θ̇) θ̈ ≤ (1/t_c)(Φ_max - Φ_p(θ, θ̇))  ∀p ∈ P   (9)

Inequality (9) represents a system of state-dependent constraints on the joint accelerations θ̈. Using the dynamic model θ̈ = M⁻¹(θ)(τ - N(θ, θ̇)) of the manipulator allows us to expand (9):

    (∂Φ_p/∂θ) θ̇ + (∂Φ_p/∂θ̇) M⁻¹(θ)(τ - N(θ, θ̇)) ≤ (1/t_c)(Φ_max - Φ_p)  ∀p ∈ P   (10)

Inequality (10) defines closed halfspaces in the torque space. The safe region in torque space is the intersection of these halfspaces; note that an intersection of halfspaces is always convex.
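One simple way to realize such a clipping function is successive projection onto the violated halfspaces. The sketch below converges to a feasible torque in the convex safe region; it does not in general return the exact nearest safe vector (that would need e.g. Dykstra's algorithm or a small QP), and the constraint matrices are illustrative, not derived from inequality (10):

```python
import numpy as np

def clip_torque(tau_desired, A, b, iters=100):
    """Drive a desired torque vector into the safe region
    { tau : A @ tau <= b }, an intersection of closed halfspaces.

    Cyclic projection: each step is the exact Euclidean projection onto
    one violated halfspace; repeated sweeps over all constraints
    converge to a point in the (convex) intersection.
    """
    tau = np.asarray(tau_desired, dtype=float).copy()
    for _ in range(iters):
        violated = False
        for a_i, b_i in zip(A, b):
            slack = a_i @ tau - b_i
            if slack > 1e-12:
                # Project onto the hyperplane a_i . tau = b_i.
                tau -= slack * a_i / (a_i @ a_i)
                violated = True
        if not violated:
            break  # all constraints satisfied
    return tau
```

With axis-aligned constraints the result coincides with the true nearest safe vector; for general constraint geometries it is merely feasible, which is sufficient to keep the robot inside the safe region.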
6.7 Work in Progress

The safety envelope formulation has been implemented and tested on a dynamic simulation of a planar 2-DOF robot. The system is currently being implemented on a real robot. The hardware platform for this project is a Barrett WAM robot, the commercial version of the MIT arm built by Townsend and Salisbury [13]. It has 7 DOF and all joints are driven by cable transmissions. This allows very low friction, and thus the robot can easily be mechanically backdriven by a person.

7 Conclusions

In this paper, we presented a vision-based human-machine interface. The interface is able to detect the position and orientation of the face in real time, reliably, accurately and robustly. The system is non-intrusive and passive, making it a natural interface. We also presented a new approach to human-friendly robot control. We defined sensible goals for a human-friendly robot scheme, namely to ensure the safety of all autonomous actions of the robot, while the user has the responsibility to use the robot safely. The basic Zero-G floating behaviour ensures that a user can understand and predict the robot's actions. The safety of autonomous motion is guaranteed by the safety envelope, which ensures that the maximum impact energy of the robot does not exceed a preset limit.

References

[1] A. Azarbayejani, T. Starner, B. Horowitz, and A. Pentland. Visually controlled graphics. IEEE Trans. on Pattern Analysis and Machine Intelligence, 15(6):602-605.
[2] A. Zelinsky and J. Heinzmann. Real-time Visual Recognition of Facial Gestures for Human-Computer Interaction. In Proc. of the Int. Conf. on Automatic Face and Gesture Recognition, pages 351-356.
[3] P. Ballard and G. C. Stockman. Controlling a Computer via Facial Aspect. IEEE Trans. Sys. Man and Cybernetics, 25(4):669-677.
[4] Black and Yacoob. Tracking and Recognizing Rigid and Non-rigid Facial Motions Using Parametric Models of Image Motion. In Proc. of Int. Conf. on Computer Vision (ICCV'95), pages 374-381, 1995.
[5] S. Birchfield and C. Tomasi. Elliptical Head Tracking Using Intensity Gradients and Color Histograms. In Proc. of Computer Vision and Pattern Recognition (CVPR'98).
[6] A. Gee and R. Cipolla. Fast Visual Tracking by Temporal Consensus. Image and Vision Computing, 14(2):105-114.
[7] Kentaro Toyama. Look, Ma - No Hands! Hands-Free Cursor Control with Real-time 3D Face Tracking. In Proc. of Workshop on Perceptual User Interfaces (PUI'98).
[8] J. Heinzmann and A. Zelinsky. 3-D Facial Pose and Gaze Point Estimation using a Robust Real-Time Tracking Paradigm. In Proc. of the Int. Conf. on Automatic Face and Gesture Recognition.
[9] R. Stiefelhagen, J. Yang, and A. Waibel. Tracking Eyes and Monitoring Eye Gaze. In Proc. of Workshop on Perceptual User Interfaces (PUI'97).
[10] Y. Matsumoto, T. Shibata, K. Sakai, M. Inaba, and H. Inoue. Real-time Color Stereo Vision System for a Mobile Robot based on Field Multiplexing. In Proc. of IEEE Int. Conf. on Robotics and Automation, pages 1934-1939.
[11] John J. Craig. Introduction to Robotics. Addison-Wesley, 2nd edition.
[12] Ian D. Walker. Impact configurations and measures for kinematically redundant and multiple armed robot systems. IEEE Trans. Robotics and Automation, 10(5):670-683.
[13] W. T. Townsend and J. K. Salisbury. Mechanical Design for Whole-Arm Manipulation. In Robots and Biological Systems: Toward a New Bionics?, pages 153-164.
Robotics () Winter 1393 Bonab University : most basic study of how mechanical systems behave Introduction Need to understand the mechanical behavior for: Design Control Both: Manipulators, Mobile Robots
More informationResearch Subject. Dynamics Computation and Behavior Capture of Human Figures (Nakamura Group)
Research Subject Dynamics Computation and Behavior Capture of Human Figures (Nakamura Group) (1) Goal and summary Introduction Humanoid has less actuators than its movable degrees of freedom (DOF) which
More informationA Real Time System for Detecting and Tracking People. Ismail Haritaoglu, David Harwood and Larry S. Davis. University of Maryland
W 4 : Who? When? Where? What? A Real Time System for Detecting and Tracking People Ismail Haritaoglu, David Harwood and Larry S. Davis Computer Vision Laboratory University of Maryland College Park, MD
More informationProceedings of the 6th Int. Conf. on Computer Analysis of Images and Patterns. Direct Obstacle Detection and Motion. from Spatio-Temporal Derivatives
Proceedings of the 6th Int. Conf. on Computer Analysis of Images and Patterns CAIP'95, pp. 874-879, Prague, Czech Republic, Sep 1995 Direct Obstacle Detection and Motion from Spatio-Temporal Derivatives
More informationLocal qualitative shape from stereo. without detailed correspondence. Extended Abstract. Shimon Edelman. Internet:
Local qualitative shape from stereo without detailed correspondence Extended Abstract Shimon Edelman Center for Biological Information Processing MIT E25-201, Cambridge MA 02139 Internet: edelman@ai.mit.edu
More information3. International Conference on Face and Gesture Recognition, April 14-16, 1998, Nara, Japan 1. A Real Time System for Detecting and Tracking People
3. International Conference on Face and Gesture Recognition, April 14-16, 1998, Nara, Japan 1 W 4 : Who? When? Where? What? A Real Time System for Detecting and Tracking People Ismail Haritaoglu, David
More informationVisual Servoing Utilizing Zoom Mechanism
IEEE Int. Conf. on Robotics and Automation 1995, pp.178 183, Nagoya, May. 12 16, 1995 1 Visual Servoing Utilizing Zoom Mechanism Koh HOSODA, Hitoshi MORIYAMA and Minoru ASADA Dept. of Mechanical Engineering
More informationProf. Fanny Ficuciello Robotics for Bioengineering Visual Servoing
Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level
More informationTracking of Human Body using Multiple Predictors
Tracking of Human Body using Multiple Predictors Rui M Jesus 1, Arnaldo J Abrantes 1, and Jorge S Marques 2 1 Instituto Superior de Engenharia de Lisboa, Postfach 351-218317001, Rua Conselheiro Emído Navarro,
More informationGrasp Recognition using a 3D Articulated Model and Infrared Images
Grasp Recognition using a 3D Articulated Model and Infrared Images Koichi Ogawara Institute of Industrial Science, Univ. of Tokyo, Tokyo, Japan Jun Takamatsu Institute of Industrial Science, Univ. of Tokyo,
More information2 ATTILA FAZEKAS The tracking model of the robot car The schematic picture of the robot car can be seen on Fig.1. Figure 1. The main controlling task
NEW OPTICAL TRACKING METHODS FOR ROBOT CARS Attila Fazekas Debrecen Abstract. In this paper new methods are proposed for intelligent optical tracking of robot cars the important tools of CIM (Computer
More informationVisual Tracking of Unknown Moving Object by Adaptive Binocular Visual Servoing
Visual Tracking of Unknown Moving Object by Adaptive Binocular Visual Servoing Minoru Asada, Takamaro Tanaka, and Koh Hosoda Adaptive Machine Systems Graduate School of Engineering Osaka University, Suita,
More informationA Stereo Vision-based Mixed Reality System with Natural Feature Point Tracking
A Stereo Vision-based Mixed Reality System with Natural Feature Point Tracking Masayuki Kanbara y, Hirofumi Fujii z, Haruo Takemura y and Naokazu Yokoya y ygraduate School of Information Science, Nara
More informationLecture VI: Constraints and Controllers. Parts Based on Erin Catto s Box2D Tutorial
Lecture VI: Constraints and Controllers Parts Based on Erin Catto s Box2D Tutorial Motion Constraints In practice, no rigid body is free to move around on its own. Movement is constrained: wheels on a
More informationLesson 1: Introduction to Pro/MECHANICA Motion
Lesson 1: Introduction to Pro/MECHANICA Motion 1.1 Overview of the Lesson The purpose of this lesson is to provide you with a brief overview of Pro/MECHANICA Motion, also called Motion in this book. Motion
More informationTechniques. IDSIA, Istituto Dalle Molle di Studi sull'intelligenza Articiale. Phone: Fax:
Incorporating Learning in Motion Planning Techniques Luca Maria Gambardella and Marc Haex IDSIA, Istituto Dalle Molle di Studi sull'intelligenza Articiale Corso Elvezia 36 - CH - 6900 Lugano Phone: +41
More informationSelf-Collision Detection and Prevention for Humanoid Robots. Talk Overview
Self-Collision Detection and Prevention for Humanoid Robots James Kuffner, Jr. Carnegie Mellon University Koichi Nishiwaki The University of Tokyo Satoshi Kagami Digital Human Lab (AIST) Masayuki Inaba
More informationDesign Specication. Group 3
Design Specication Group 3 September 20, 2012 Project Identity Group 3, 2012/HT, "The Robot Dog" Linköping University, ISY Name Responsibility Phone number E-mail Martin Danelljan Design 072-372 6364 marda097@student.liu.se
More informationof human activities. Our research is motivated by considerations of a ground-based mobile surveillance system that monitors an extended area for
To Appear in ACCV-98, Mumbai-India, Material Subject to ACCV Copy-Rights Visual Surveillance of Human Activity Larry Davis 1 Sandor Fejes 1 David Harwood 1 Yaser Yacoob 1 Ismail Hariatoglu 1 Michael J.
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationDense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera
Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Tomokazu Satoy, Masayuki Kanbaray, Naokazu Yokoyay and Haruo Takemuraz ygraduate School of Information
More informationSkill. Robot/ Controller
Skill Acquisition from Human Demonstration Using a Hidden Markov Model G. E. Hovland, P. Sikka and B. J. McCarragher Department of Engineering Faculty of Engineering and Information Technology The Australian
More informationVision-based Manipulator Navigation. using Mixtures of RBF Neural Networks. Wolfram Blase, Josef Pauli, and Jorg Bruske
Vision-based Manipulator Navigation using Mixtures of RBF Neural Networks Wolfram Blase, Josef Pauli, and Jorg Bruske Christian{Albrechts{Universitat zu Kiel, Institut fur Informatik Preusserstrasse 1-9,
More informationDepartment of Electrical Engineering, Keio University Hiyoshi Kouhoku-ku Yokohama 223, Japan
Shape Modeling from Multiple View Images Using GAs Satoshi KIRIHARA and Hideo SAITO Department of Electrical Engineering, Keio University 3-14-1 Hiyoshi Kouhoku-ku Yokohama 223, Japan TEL +81-45-563-1141
More informationAdvanced Imaging Applications on Smart-phones Convergence of General-purpose computing, Graphics acceleration, and Sensors
Advanced Imaging Applications on Smart-phones Convergence of General-purpose computing, Graphics acceleration, and Sensors Sriram Sethuraman Technologist & DMTS, Ittiam 1 Overview Imaging on Smart-phones
More informationPartitioning Contact State Space Using the Theory of Polyhedral Convex Cones George V Paul and Katsushi Ikeuchi
Partitioning Contact State Space Using the Theory of Polyhedral Convex Cones George V Paul and Katsushi Ikeuchi August 1994 CMU-RI-TR-94-36 Robotics Institute Carnegie Mellon University Pittsburgh, PA
More informationTask analysis based on observing hands and objects by vision
Task analysis based on observing hands and objects by vision Yoshihiro SATO Keni Bernardin Hiroshi KIMURA Katsushi IKEUCHI Univ. of Electro-Communications Univ. of Karlsruhe Univ. of Tokyo Abstract In
More informationTemplate Matching Rigid Motion. Find transformation to align two images. Focus on geometric features
Template Matching Rigid Motion Find transformation to align two images. Focus on geometric features (not so much interesting with intensity images) Emphasis on tricks to make this efficient. Problem Definition
More informationSingularity Handling on Puma in Operational Space Formulation
Singularity Handling on Puma in Operational Space Formulation Denny Oetomo, Marcelo Ang Jr. National University of Singapore Singapore d oetomo@yahoo.com mpeangh@nus.edu.sg Ser Yong Lim Gintic Institute
More informationTemplate Matching Rigid Motion
Template Matching Rigid Motion Find transformation to align two images. Focus on geometric features (not so much interesting with intensity images) Emphasis on tricks to make this efficient. Problem Definition
More informationTracking Multiple Objects in 3D. Coimbra 3030 Coimbra target projection in the same image position (usually
Tracking Multiple Objects in 3D Jo~ao P. Barreto Paulo Peioto Coimbra 33 Coimbra 33 Jorge Batista Helder Araujo Coimbra 33 Coimbra 33 Abstract In this paper a system for tracking multiple targets in 3D
More informationOutdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera
Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Tomokazu Sato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute
More informationHand-Eye Calibration from Image Derivatives
Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed
More informationA Simple Technique to Passively Gravity-Balance Articulated Mechanisms
A Simple Technique to Passively Gravity-Balance Articulated Mechanisms Tariq Rahman, Ph.D. Research Engineer University of Delaware/A.I. dupont Institute Applied Science and Engineering Laboratories A.I.
More informationDevelopment of an optomechanical measurement system for dynamic stability analysis
Development of an optomechanical measurement system for dynamic stability analysis Simone Pasinetti Dept. of Information Engineering (DII) University of Brescia Brescia, Italy simone.pasinetti@unibs.it
More informationy 2 x 1 Simulator Controller client (C,Java...) Client Ball position joint 2 link 2 link 1 0 joint 1 link 0 (virtual) State of the ball
Vision-based interaction with virtual worlds for the design of robot controllers D. d'aulignac, V. Callaghan and S. Lucas Department of Computer Science, University of Essex, Colchester CO4 3SQ, UK Abstract
More informationFitting (LMedS, RANSAC)
Fitting (LMedS, RANSAC) Thursday, 23/03/2017 Antonis Argyros e-mail: argyros@csd.uoc.gr LMedS and RANSAC What if we have very many outliers? 2 1 Least Median of Squares ri : Residuals Least Squares n 2
More informationAccurate 3D Face and Body Modeling from a Single Fixed Kinect
Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this
More informationProperties of Hyper-Redundant Manipulators
Properties of Hyper-Redundant Manipulators A hyper-redundant manipulator has unconventional features such as the ability to enter a narrow space while avoiding obstacles. Thus, it is suitable for applications:
More informationAssist System for Carrying a Long Object with a Human - Analysis of a Human Cooperative Behavior in the Vertical Direction -
Assist System for Carrying a Long with a Human - Analysis of a Human Cooperative Behavior in the Vertical Direction - Yasuo HAYASHIBARA Department of Control and System Engineering Toin University of Yokohama
More informationChapter 4 Dynamics. Part Constrained Kinematics and Dynamics. Mobile Robotics - Prof Alonzo Kelly, CMU RI
Chapter 4 Dynamics Part 2 4.3 Constrained Kinematics and Dynamics 1 Outline 4.3 Constrained Kinematics and Dynamics 4.3.1 Constraints of Disallowed Direction 4.3.2 Constraints of Rolling without Slipping
More informationLOCALIZATION OF FACIAL REGIONS AND FEATURES IN COLOR IMAGES. Karin Sobottka Ioannis Pitas
LOCALIZATION OF FACIAL REGIONS AND FEATURES IN COLOR IMAGES Karin Sobottka Ioannis Pitas Department of Informatics, University of Thessaloniki 540 06, Greece e-mail:fsobottka, pitasg@zeus.csd.auth.gr Index
More informationAutomatic Control Industrial robotics
Automatic Control Industrial robotics Prof. Luca Bascetta (luca.bascetta@polimi.it) Politecnico di Milano Dipartimento di Elettronica, Informazione e Bioingegneria Prof. Luca Bascetta Industrial robots
More informationWritten exams of Robotics 2
Written exams of Robotics 2 http://www.diag.uniroma1.it/~deluca/rob2_en.html All materials are in English, unless indicated (oldies are in Year Date (mm.dd) Number of exercises Topics 2018 07.11 4 Inertia
More informationProc. Int. Symp. Robotics, Mechatronics and Manufacturing Systems 92 pp , Kobe, Japan, September 1992
Proc. Int. Symp. Robotics, Mechatronics and Manufacturing Systems 92 pp.957-962, Kobe, Japan, September 1992 Tracking a Moving Object by an Active Vision System: PANTHER-VZ Jun Miura, Hideharu Kawarabayashi,
More informationSimulation-Based Design of Robotic Systems
Simulation-Based Design of Robotic Systems Shadi Mohammad Munshi* & Erik Van Voorthuysen School of Mechanical and Manufacturing Engineering, The University of New South Wales, Sydney, NSW 2052 shadimunshi@hotmail.com,
More informationInverse KKT Motion Optimization: A Newton Method to Efficiently Extract Task Spaces and Cost Parameters from Demonstrations
Inverse KKT Motion Optimization: A Newton Method to Efficiently Extract Task Spaces and Cost Parameters from Demonstrations Peter Englert Machine Learning and Robotics Lab Universität Stuttgart Germany
More informationFORCE CONTROL OF LINK SYSTEMS USING THE PARALLEL SOLUTION SCHEME
FORCE CONTROL OF LIN SYSTEMS USING THE PARALLEL SOLUTION SCHEME Daigoro Isobe Graduate School of Systems and Information Engineering, University of Tsukuba 1-1-1 Tennodai Tsukuba-shi, Ibaraki 35-8573,
More informationLocalization of Wearable Users Using Invisible Retro-reflective Markers and an IR Camera
Localization of Wearable Users Using Invisible Retro-reflective Markers and an IR Camera Yusuke Nakazato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute of Science
More informationExam in DD2426 Robotics and Autonomous Systems
Exam in DD2426 Robotics and Autonomous Systems Lecturer: Patric Jensfelt KTH, March 16, 2010, 9-12 No aids are allowed on the exam, i.e. no notes, no books, no calculators, etc. You need a minimum of 20
More informationCooperative Conveyance of an Object with Tethers by Two Mobile Robots
Proceeding of the 11th World Congress in Mechanism and Machine Science April 1-4, 2004, Tianjin, China China Machine Press, edited by Tian Huang Cooperative Conveyance of an Object with Tethers by Two
More informationImage-Based Memory of Environment. homing uses a similar idea that the agent memorizes. [Hong 91]. However, the agent nds diculties in arranging its
Image-Based Memory of Environment Hiroshi ISHIGURO Department of Information Science Kyoto University Kyoto 606-01, Japan E-mail: ishiguro@kuis.kyoto-u.ac.jp Saburo TSUJI Faculty of Systems Engineering
More informationto be known. Let i be the leg lengths (the distance between A i and B i ), X a 6-dimensional vector dening the pose of the end-eector: the three rst c
A formal-numerical approach to determine the presence of singularity within the workspace of a parallel robot J-P. Merlet INRIA Sophia-Antipolis France Abstract: Determining if there is a singularity within
More informationRedundancy Resolution by Minimization of Joint Disturbance Torque for Independent Joint Controlled Kinematically Redundant Manipulators
56 ICASE :The Institute ofcontrol,automation and Systems Engineering,KOREA Vol.,No.1,March,000 Redundancy Resolution by Minimization of Joint Disturbance Torque for Independent Joint Controlled Kinematically
More informationNovel Collision Detection Index based on Joint Torque Sensors for a Redundant Manipulator
3 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) November 3-7, 3. Tokyo, Japan Novel Collision Detection Index based on Joint Torque Sensors for a Redundant Manipulator Sang-Duck
More informationOptical Flow-Based Person Tracking by Multiple Cameras
Proc. IEEE Int. Conf. on Multisensor Fusion and Integration in Intelligent Systems, Baden-Baden, Germany, Aug. 2001. Optical Flow-Based Person Tracking by Multiple Cameras Hideki Tsutsui, Jun Miura, and
More informationDevelopment of 3D Positioning Scheme by Integration of Multiple Wiimote IR Cameras
Proceedings of the 5th IIAE International Conference on Industrial Application Engineering 2017 Development of 3D Positioning Scheme by Integration of Multiple Wiimote IR Cameras Hui-Yuan Chan *, Ting-Hao
More informationBinocular Tracking Based on Virtual Horopters. ys. Rougeaux, N. Kita, Y. Kuniyoshi, S. Sakane, yf. Chavand
Binocular Tracking Based on Virtual Horopters ys. Rougeaux, N. Kita, Y. Kuniyoshi, S. Sakane, yf. Chavand Autonomous Systems Section ylaboratoire de Robotique d'evry Electrotechnical Laboratory Institut
More informationA Fast and Stable Approach for Restoration of Warped Document Images
A Fast and Stable Approach for Restoration of Warped Document Images Kok Beng Chua, Li Zhang, Yu Zhang and Chew Lim Tan School of Computing, National University of Singapore 3 Science Drive 2, Singapore
More informationSelf-Collision Detection. Planning for Humanoid Robots. Digital Human Research Center. Talk Overview
Self-Collision Detection and Motion Planning for Humanoid Robots James Kuffner (CMU & AIST Japan) Digital Human Research Center Self-Collision Detection Feature-based Minimum Distance Computation: Approximate
More informationUsing Artificial Neural Networks for Prediction Of Dynamic Human Motion
ABSTRACT Using Artificial Neural Networks for Prediction Of Dynamic Human Motion Researchers in robotics and other human-related fields have been studying human motion behaviors to understand and mimic
More informationTransactions on Information and Communications Technologies vol 19, 1997 WIT Press, ISSN
Hopeld Network for Stereo Correspondence Using Block-Matching Techniques Dimitrios Tzovaras and Michael G. Strintzis Information Processing Laboratory, Electrical and Computer Engineering Department, Aristotle
More informationRobots are built to accomplish complex and difficult tasks that require highly non-linear motions.
Path and Trajectory specification Robots are built to accomplish complex and difficult tasks that require highly non-linear motions. Specifying the desired motion to achieve a specified goal is often a
More informationAn Interactive Software Environment for Gait Generation and Control Design of Sony Legged Robots
An Interactive Software Environment for Gait Generation and Control Design of Sony Legged Robots Dragos Golubovic and Huosheng Hu Department of Computer Science, University of Essex, Colchester CO4 3SQ,
More informationManipulator Dynamics: Two Degrees-of-freedom
Manipulator Dynamics: Two Degrees-of-freedom 2018 Max Donath Manipulator Dynamics Objective: Calculate the torques necessary to overcome dynamic effects Consider 2 dimensional example Based on Lagrangian
More informationFundamental Technologies Driving the Evolution of Autonomous Driving
426 Hitachi Review Vol. 65 (2016), No. 9 Featured Articles Fundamental Technologies Driving the Evolution of Autonomous Driving Takeshi Shima Takeshi Nagasaki Akira Kuriyama Kentaro Yoshimura, Ph.D. Tsuneo
More informationRobot Vision without Calibration
XIV Imeko World Congress. Tampere, 6/97 Robot Vision without Calibration Volker Graefe Institute of Measurement Science Universität der Bw München 85577 Neubiberg, Germany Phone: +49 89 6004-3590, -3587;
More informationVisual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech
Visual Odometry Features, Tracking, Essential Matrix, and RANSAC Stephan Weiss Computer Vision Group NASA-JPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline The
More informationTable of Contents. Chapter 1. Modeling and Identification of Serial Robots... 1 Wisama KHALIL and Etienne DOMBRE
Chapter 1. Modeling and Identification of Serial Robots.... 1 Wisama KHALIL and Etienne DOMBRE 1.1. Introduction... 1 1.2. Geometric modeling... 2 1.2.1. Geometric description... 2 1.2.2. Direct geometric
More informationSPARTAN ROBOTICS FRC 971
SPARTAN ROBOTICS FRC 971 Controls Documentation 2015 Design Goals Create a reliable and effective system for controlling and debugging robot code that provides greater flexibility and higher performance
More information10/25/2018. Robotics and automation. Dr. Ibrahim Al-Naimi. Chapter two. Introduction To Robot Manipulators
Robotics and automation Dr. Ibrahim Al-Naimi Chapter two Introduction To Robot Manipulators 1 Robotic Industrial Manipulators A robot manipulator is an electronically controlled mechanism, consisting of
More informationManipulator Path Control : Path Planning, Dynamic Trajectory and Control Analysis
Manipulator Path Control : Path Planning, Dynamic Trajectory and Control Analysis Motion planning for industrial manipulators is a challenging task when obstacles are present in the workspace so that collision-free
More informationMotion Estimation for Video Coding Standards
Motion Estimation for Video Coding Standards Prof. Ja-Ling Wu Department of Computer Science and Information Engineering National Taiwan University Introduction of Motion Estimation The goal of video compression
More informationFoveated Vision and Object Recognition on Humanoid Robots
Foveated Vision and Object Recognition on Humanoid Robots Aleš Ude Japan Science and Technology Agency, ICORP Computational Brain Project Jožef Stefan Institute, Dept. of Automatics, Biocybernetics and
More information3D Perception. CS 4495 Computer Vision K. Hawkins. CS 4495 Computer Vision. 3D Perception. Kelsey Hawkins Robotics
CS 4495 Computer Vision Kelsey Hawkins Robotics Motivation What do animals, people, and robots want to do with vision? Detect and recognize objects/landmarks Find location of objects with respect to themselves
More information6-dof Eye-vergence visual servoing by 1-step GA pose tracking
International Journal of Applied Electromagnetics and Mechanics 52 (216) 867 873 867 DOI 1.3233/JAE-16225 IOS Press 6-dof Eye-vergence visual servoing by 1-step GA pose tracking Yu Cui, Kenta Nishimura,
More informationA 3-D Scanner Capturing Range and Color for the Robotics Applications
J.Haverinen & J.Röning, A 3-D Scanner Capturing Range and Color for the Robotics Applications, 24th Workshop of the AAPR - Applications of 3D-Imaging and Graph-based Modeling, May 25-26, Villach, Carinthia,
More informationIntuitive Human-Robot Interaction through Active 3D Gaze Tracking
Intuitive Human-Robot Interaction through Active 3D Gaze Tracking Rowel Atienza and Alexander Zelinsky Research School of Information Sciences and Engineering The Australian National University Canberra,
More informationA Robust Method of Facial Feature Tracking for Moving Images
A Robust Method of Facial Feature Tracking for Moving Images Yuka Nomura* Graduate School of Interdisciplinary Information Studies, The University of Tokyo Takayuki Itoh Graduate School of Humanitics and
More informationDEVELOPMENT OF TELE-ROBOTIC INTERFACE SYSTEM FOR THE HOT-LINE MAINTENANCE. Chang-Hyun Kim, Min-Soeng Kim, Ju-Jang Lee,1
DEVELOPMENT OF TELE-ROBOTIC INTERFACE SYSTEM FOR THE HOT-LINE MAINTENANCE Chang-Hyun Kim, Min-Soeng Kim, Ju-Jang Lee,1 Dept. of Electrical Engineering and Computer Science Korea Advanced Institute of Science
More informationIVR: Open- and Closed-Loop Control. M. Herrmann
IVR: Open- and Closed-Loop Control M. Herrmann Overview Open-loop control Feed-forward control Towards feedback control Controlling the motor over time Process model V B = k 1 s + M k 2 R ds dt Stationary
More informationBitangent 3. Bitangent 1. dist = max Region A. Region B. Bitangent 2. Bitangent 4
Ecient pictograph detection Dietrich Buesching TU Muenchen, Fakultaet fuer Informatik FG Bildverstehen 81667 Munich, Germany email: bueschin@informatik.tu-muenchen.de 1 Introduction Pictographs are ubiquitous
More informationREDUCED END-EFFECTOR MOTION AND DISCONTINUITY IN SINGULARITY HANDLING TECHNIQUES WITH WORKSPACE DIVISION
REDUCED END-EFFECTOR MOTION AND DISCONTINUITY IN SINGULARITY HANDLING TECHNIQUES WITH WORKSPACE DIVISION Denny Oetomo Singapore Institute of Manufacturing Technology Marcelo Ang Jr. Dept. of Mech. Engineering
More informationKinematics. Kinematics analyzes the geometry of a manipulator, robot or machine motion. The essential concept is a position.
Kinematics Kinematics analyzes the geometry of a manipulator, robot or machine motion. The essential concept is a position. 1/31 Statics deals with the forces and moments which are aplied on the mechanism
More informationImage Processing Methods for Interactive Robot Control
Image Processing Methods for Interactive Robot Control Christoph Theis 1, Ioannis Iossifidis 2 and Axel Steinhage 3 1,2 Institut für Neuroinformatik, Ruhr-Univerität Bochum, Germany 3 Infineon Technologies
More informationAUTONOMOUS PLANETARY ROVER CONTROL USING INVERSE SIMULATION
AUTONOMOUS PLANETARY ROVER CONTROL USING INVERSE SIMULATION Kevin Worrall (1), Douglas Thomson (1), Euan McGookin (1), Thaleia Flessa (1) (1)University of Glasgow, Glasgow, G12 8QQ, UK, Email: kevin.worrall@glasgow.ac.uk
More informationA simple example. Assume we want to find the change in the rotation angles to get the end effector to G. Effect of changing s
CENG 732 Computer Animation This week Inverse Kinematics (continued) Rigid Body Simulation Bodies in free fall Bodies in contact Spring 2006-2007 Week 5 Inverse Kinematics Physically Based Rigid Body Simulation
More informationSimple and Robust Tracking of Hands and Objects for Video-based Multimedia Production
Simple and Robust Tracking of Hands and Objects for Video-based Multimedia Production Masatsugu ITOH Motoyuki OZEKI Yuichi NAKAMURA Yuichi OHTA Institute of Engineering Mechanics and Systems University
More informationROBOTICS 01PEEQW. Basilio Bona DAUIN Politecnico di Torino
ROBOTICS 01PEEQW Basilio Bona DAUIN Politecnico di Torino Control Part 4 Other control strategies These slides are devoted to two advanced control approaches, namely Operational space control Interaction
More informationINCREMENTAL DISPLACEMENT ESTIMATION METHOD FOR VISUALLY SERVOED PARIED STRUCTURED LIGHT SYSTEM (ViSP)
Blucher Mechanical Engineering Proceedings May 2014, vol. 1, num. 1 www.proceedings.blucher.com.br/evento/10wccm INCREMENAL DISPLACEMEN ESIMAION MEHOD FOR VISUALLY SERVOED PARIED SRUCURED LIGH SYSEM (ViSP)
More informationTo appear in ECCV-94, Stockholm, Sweden, May 2-6, 1994.
To appear in ECCV-94, Stockholm, Sweden, May 2-6, 994. Recognizing Hand Gestures? James Davis and Mubarak Shah?? Computer Vision Laboratory, University of Central Florida, Orlando FL 3286, USA Abstract.
More informationA Fuzzy Local Path Planning and Obstacle Avoidance for Mobile Robots
A Fuzzy Local Path Planning and Obstacle Avoidance for Mobile Robots H.Aminaiee Department of Electrical and Computer Engineering, University of Tehran, Tehran, Iran Abstract This paper presents a local
More information