Development of a Fall Detection System with Microsoft Kinect


Christopher Kawatsu, Jiaxing Li, and C.J. Chung

Department of Mathematics and Computer Science, Lawrence Technological University, West Ten Mile Road, Southfield, MI, USA
{ckawatsu,jli,cchung}@ltu.edu

Abstract. Falls are the leading cause of injury and death among older adults in the US. Computer vision systems offer a promising way of detecting falls. The present paper examines a fall detection and reporting system using the Microsoft Kinect sensor. Two algorithms for detecting falls are introduced. The first uses only a single frame to determine whether a fall has occurred. The second uses time series data and can distinguish between falls and slowly lying down on the floor. In addition to detecting falls, the system offers several options for reporting. Reports can be sent as emails or text messages and can include pictures taken during and after the fall. A voice recognition system can be used to cancel false reports.

1 Introduction

According to the Centers for Disease Control and Prevention [2], falls are the leading cause of injury and death for older adults, with one out of three adults 65 years and older falling each year. As a result, systems that detect falls have become a research topic of interest in recent years.

A variety of approaches have been used to detect falls. Wearable sensors that measure acceleration can detect falls; this approach is examined by Noury et al. [4]. Such systems have the drawback that the user must remember to wear the sensors. Floor-mounted sensors can also be used: Alwan et al. [1] use floor-mounted vibration sensors to detect falls. This eliminates the need for the user to wear a sensor; however, these systems are often expensive and complex to install.

The present paper focuses on computer vision based systems. Fall detection systems using a variety of vision systems have been developed in recent years. Khan and Habib [3] developed a fall detection system which uses a single camera. The system uses background subtraction to isolate the location of a person in the image; motion gradient data is then used to determine whether a fall has occurred. The drawback of this system is that the motion gradient of a fall very near the camera will be much higher than that of a fall far from the camera. To solve this problem, the 3D rather than 2D location of the person must be known.

In the present paper an affordable but reliable way to develop fall detection systems using the Microsoft Kinect sensor is introduced. Rougier et al. [5] have developed a similar system which also uses the Microsoft Kinect sensor, and which likewise implements both a position based and a velocity based algorithm for detecting falls.

However, the present paper uses the locations of 20 joints during the computation, while Rougier et al. use only the centroid location. Additionally, the present paper provides methods for reporting falls and reducing false positive reports.

The paper is organized as follows. Section 2 provides basic information about the Kinect sensor and the accompanying Software Development Kit (SDK). Section 3 discusses the two algorithms used to detect falls. The voice recognition system used to validate falls is described in Section 4. Several methods of reporting falls are introduced in Section 5.

2 Kinect Overview

The Kinect contains three types of sensors: a standard camera, an IR camera, and a microphone array. The IR camera detects points projected by a laser and automatically converts them into a depth map. The cameras are calibrated so that the depth map pixels correspond to the pixels in the standard camera images.

The Kinect SDK is a free software package which provides a variety of useful tools. The software automatically detects the 3D locations of 20 joints for up to two people; no markers are required for the software to detect joint locations. Information about the algorithm used by this software is provided in [6]. In addition to joint locations, the Kinect SDK also detects the location of the floor plane.

3 Fall Detection Algorithms

We developed two algorithms to detect falls using the Kinect SDK. The first algorithm uses only joint position data: it calculates the distance from the floor to each joint, and if the maximum distance is less than some threshold value, a fall is detected. The second algorithm calculates the velocity of each joint in the direction normal to the floor plane. The velocities are averaged over all joints and over many frames. If this average velocity is below some threshold value (downward velocities are defined as negative), a fall is detected.

3.1 Position Algorithm

The Kinect SDK provides data in frames at a rate of 30 frames per second. Each frame is processed by the fall detection algorithm to determine whether the state is fall or not fall. For each joint on a skeleton, four pieces of data are acquired from the Kinect SDK: the x, y, and z coordinates of the joint and the tracking information for the joint. Joints can be tracked, not tracked, or inferred. The x, y, and z coordinates are Cartesian coordinates in meters with the Kinect sensor located at the origin. In addition to joint information, the equation of the floor plane (in the same coordinate system as the joints) is acquired. The Kinect SDK provides the plane information in the form of the A, B, C, and D parameters of

$$Ax + By + Cz + D = 0. \qquad (1)$$
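For concreteness, the per-frame quantities just described (joint coordinates, tracking state, and the floor plane parameters) can be modeled with a few small types. This is a minimal Python sketch rather than the Kinect SDK's actual C# API; all names here are our own.

```python
# Minimal data model for the per-frame quantities described above.
# This is an illustrative sketch, not the Kinect SDK's actual API.
from enum import Enum
from typing import NamedTuple

class TrackingState(Enum):
    NOT_TRACKED = 0
    INFERRED = 1
    TRACKED = 2

class Joint(NamedTuple):
    x: float  # meters, Kinect sensor at the origin
    y: float
    z: float
    state: TrackingState

class FloorPlane(NamedTuple):
    a: float  # parameters of the plane equation Ax + By + Cz + D = 0
    b: float
    c: float
    d: float
```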

In Cartesian space, the length of a vector normal to a plane, ending at a point (x, y, z), can be calculated using [7]

$$d = \frac{Ax + By + Cz + D}{\sqrt{A^2 + B^2 + C^2}}. \qquad (2)$$

Using this relation the normal distance from the floor to each joint is obtained. The fall detection algorithm considers distances only for joints which are tracked by the Kinect. If every tracked joint has a normal distance less than some threshold, the algorithm sets the state to fall; otherwise, not fall.

Fig. 1. Maximum joint distance from the floor plane is above the threshold

Fig. 2. Maximum joint distance from the floor plane is below the threshold
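The position algorithm reduces to Eq. (2) plus a threshold test over the tracked joints. A minimal sketch, assuming the types above; the paper does not report the threshold it uses, so the 0.4 m value here is illustrative only.

```python
# Sketch of the position algorithm (Section 3.1).
import math

FALL_DISTANCE_THRESHOLD = 0.4  # meters; illustrative, not from the paper

def point_plane_distance(joint: Joint, plane: FloorPlane) -> float:
    """Signed normal distance from a joint to the floor plane, Eq. (2)."""
    num = plane.a * joint.x + plane.b * joint.y + plane.c * joint.z + plane.d
    return num / math.sqrt(plane.a ** 2 + plane.b ** 2 + plane.c ** 2)

def is_fall_position(joints: list, plane: FloorPlane) -> bool:
    """Fall if the maximum distance over *tracked* joints is below the
    threshold, i.e. every tracked joint is close to the floor."""
    tracked = [j for j in joints if j.state is TrackingState.TRACKED]
    if not tracked:
        return False  # no tracked joints, nothing to decide on
    return (max(point_plane_distance(j, plane) for j in tracked)
            < FALL_DISTANCE_THRESHOLD)
```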

3.2 Velocity Algorithm

The Kinect provides approximately 30 frames of data per second. From each frame we use the timestamp (in milliseconds) and the 3D Cartesian coordinates of each joint. We also use the angle of the Kinect sensor, which is assumed not to change throughout any calculations. Our algorithm also assumes that the Kinect is placed on a level surface. Instead of using the floor plane equation provided by the Kinect (the floor is not always detected, particularly on stairs), we calculate our own floor plane. If we assume the Kinect is on a level surface, then we can calculate the floor plane equation from the angle θ of the sensor as follows:

$$Ax + By + Cz + D = 0, \qquad A = 0, \quad B = \cos\theta, \quad C = \sin\theta, \quad D = 3.$$

A, B, and C are simply the vector normal to the floor, and D shifts the floor plane 3 meters below the Kinect. The distance from the floor plane can then be calculated using

$$d = \frac{Ax + By + Cz + D}{\sqrt{A^2 + B^2 + C^2}}.$$

For frames i and i + 1, the velocity of a particular joint normal to the floor is then

$$v_i = 1000\,\frac{d_{i+1} - d_i}{t_{i+1} - t_i},$$

where t is the timestamp in milliseconds. The factor of 1000 allows us to work in the more convenient units of meters per second instead of meters per millisecond. This velocity is averaged over N frames:

$$v_{\mathrm{avg}} = \frac{1000}{N-1}\sum_{i=1}^{N-1}\frac{d_{i+1} - d_i}{t_{i+1} - t_i}.$$

If a joint is not tracked for frame i or i + 1, the velocity is not used and the value of N for that joint is decreased. Finally, we take v_avg from all 20 joints and average again:

$$v_{\mathrm{jointavg}} = \frac{1}{20}\sum_{j=1}^{20} v_{\mathrm{avg},j}.$$

If v_jointavg is less than -1 meters per second, the algorithm detects a fall.
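A sketch of the velocity algorithm, reusing point_plane_distance from the position-algorithm sketch. One small deviation from the strict 1/20 average in the formula: joints with no valid frame pairs at all are left out of the final average rather than contributing zero.

```python
# Sketch of the velocity algorithm (Section 3.2). `frames` is a list of
# (timestamp_ms, joints) pairs, where joints[j] is the j-th Joint of the
# skeleton; the -1 m/s threshold is the one given in the paper.
import math

def floor_plane_from_angle(theta: float) -> FloorPlane:
    """Floor plane from the sensor angle, assuming a level surface."""
    return FloorPlane(a=0.0, b=math.cos(theta), c=math.sin(theta), d=3.0)

def is_fall_velocity(frames: list, plane: FloorPlane,
                     num_joints: int = 20) -> bool:
    per_joint_avgs = []
    for j in range(num_joints):
        vel_sum, n = 0.0, 0
        for (t0, js0), (t1, js1) in zip(frames, frames[1:]):
            a, b = js0[j], js1[j]
            if (a.state is not TrackingState.TRACKED
                    or b.state is not TrackingState.TRACKED):
                continue  # untracked pair: skip, effectively lowering N
            d0 = point_plane_distance(a, plane)
            d1 = point_plane_distance(b, plane)
            vel_sum += 1000.0 * (d1 - d0) / (t1 - t0)  # ms -> m/s
            n += 1
        if n > 0:
            per_joint_avgs.append(vel_sum / n)
    if not per_joint_avgs:
        return False
    v_jointavg = sum(per_joint_avgs) / len(per_joint_avgs)
    return v_jointavg < -1.0  # average downward speed above 1 m/s
```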

This algorithm has problems detecting falls in a few cases. First, if someone jumps in front of the camera, it is detected as a fall. This occurs because the speed normal to the floor is very high and the duration of the downward portion of the jump is about the same as that of a fall. Second, if a person walks out of the Kinect's vision range, this is occasionally detected as a fall. This happens because, as the person walks out of view, all of the tracked joints are shifted onto the part of the person that is still visible to the Kinect's camera. For example, if only the lower half of a leg is visible to the Kinect, all 20 joints will be tracked within the area of the lower leg, which sometimes causes a very high downward velocity to be reported. Lastly, this algorithm does not perform very well on stairs: to detect falls on stairs the threshold velocity has to be lowered, and cases where someone falls forward while walking up stairs are very difficult to detect.

The first problem cannot really be fixed without using a different algorithm; however, it is unlikely that people using this system will be jumping in front of the camera. For the second case, we can eliminate most of the false reports by ignoring cases where all of the joints are very close to the left or right edge of the camera's field of view. Detecting falls on stairs will most likely require a different algorithm; for example, we could measure the angle of a person's posture while walking on stairs and detect a fall if this angle deviates greatly from the up direction.

4 Validation

Voice recognition is used to reduce false positive reports. After a fall is detected, the event is validated using the Kinect microphone array and a voice recognition system. Once a fall is detected, a new thread is created to ask the user if they require assistance. The thread waits for a response of yes or no. In the case of a yes, a fall is reported; in the case of a no, the report is canceled. A timer is also set: if the timer expires without a yes or no response being received, a fall is reported.

Fig. 3. The GUI when no fall has occurred (white border on image)
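The validation flow just described amounts to a thread, a timeout, and a three-way decision. In this sketch, recognize_yes_no is a hypothetical stand-in for the Kinect speech-recognition call, and the 30-second timeout is illustrative, since the paper does not report the value it uses.

```python
# Sketch of the validation step (Section 4). `recognize_yes_no` is a
# hypothetical stand-in for the speech recognizer; it should return
# "yes", "no", or None if the timeout expires with no answer.
import threading

RESPONSE_TIMEOUT_S = 30.0  # illustrative, not from the paper

def validate_and_report(recognize_yes_no, report_fall):
    def worker():
        # Ask whether the user requires assistance and wait for an answer.
        answer = recognize_yes_no(timeout=RESPONSE_TIMEOUT_S)
        # "yes" confirms the fall; no answer before the timeout also
        # triggers a report; only "no" cancels it.
        if answer != "no":
            report_fall()
    threading.Thread(target=worker, daemon=True).start()
```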

5 Fall Reporting

Falls are reported through email. After the algorithm detects a fall, pictures are taken 15 and 60 frames after the event. These pictures are sent to a user-defined email address after passing the validation component. The pictures can also be sent to phones using the Multimedia Messaging Service (MMS): most mobile phone providers offer a service which forwards emails with attached pictures as MMS messages. This method is free but requires the user to know the form of the email address expected by the mobile phone provider. More robust services that require only a phone number are available; however, these services are not free and charge for each message sent.

Fig. 4. The GUI after a fall has been detected and validated (red border on image)

Fig. 5. MMS messages sent by the system
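A sketch of this reporting path using Python's standard library. The SMTP host, credentials, and the carrier gateway address are placeholders; each provider expects its own email-to-MMS gateway format, which is the detail the paper notes the user must know.

```python
# Sketch of the email/MMS report (Section 5).
import smtplib
from email.message import EmailMessage

def send_fall_report(during_jpg: bytes, after_jpg: bytes, to_addr: str):
    msg = EmailMessage()
    msg["Subject"] = "Fall detected"
    msg["From"] = "falldetector@example.com"  # placeholder sender
    # For MMS, to_addr is the provider's email-to-MMS gateway address,
    # e.g. "5551234567@mms.carrier.example" (format varies by provider).
    msg["To"] = to_addr
    msg.set_content("A fall was detected. Pictures are attached.")
    for name, data in (("during.jpg", during_jpg), ("after.jpg", after_jpg)):
        msg.add_attachment(data, maintype="image", subtype="jpeg",
                           filename=name)
    with smtplib.SMTP_SSL("smtp.example.com", 465) as server:  # placeholder
        server.login("falldetector@example.com", "app-password")
        server.send_message(msg)
```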

6 Experimental Results

The system has been tested quite extensively in our lab. All of the cases we observed where the system failed to detect a fall, or incorrectly reported one, have been described in Section 3. One major concern is that a fall simulated in our lab may be significantly different from an actual fall. This could have a large impact on the velocity based algorithm: for example, if actual falls have a shorter duration or lower velocity than those recorded in our lab, the number of frames and the threshold velocity would have to be adjusted.

Fig. 6. Picture taken during the fall

Fig. 7. Picture taken after the fall

7 Concluding Remarks

The fall detection system provides an affordable way to detect and report falls. The system has also been tested with people using canes, crutches, and walkers, and works reliably. The software is available for download from

References

1. Alwan, M., Rajendran, P., Kell, S., Mack, D., Dalal, S., Wolfe, M., Felder, R.: A smart and passive floor-vibration based fall detector for elderly. In: 2nd Information and Communication Technologies, vol. 1 (2006)
2. Centers for Disease Control and Prevention: Falls Among Older Adults: An Overview, Falls/adultfalls.html
3. Khan, M.J., Habib, H.A.: Video Analytic for Fall Detection from Shape Features and Motion Gradients. In: Proceedings of the World Congress on Engineering and Computer Science (WCECS 2009), San Francisco, USA, October 20-22, vol. II (2009)
4. Noury, N., Fleury, A., Rumeau, P., Bourke, A., Laighin, G., Rialle, V., Lundy, J.: Fall detection - principles and methods. In: 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (2007)
5. Rougier, C., Auvinet, E., Rousseau, J., Mignotte, M., Meunier, J.: Fall detection from depth map video sequences. In: Abdulrazak, B., Giroux, S., Bouchard, B., Pigot, H., Mokhtari, M. (eds.) ICOST 2011. LNCS, vol. 6719. Springer, Heidelberg (2011)
6. Shotton, J., Fitzgibbon, A., Cook, M., Sharp, T., Finocchio, M., Moore, R., Kipman, A., Blake, A.: Real-time human pose recognition in parts from single depth images. In: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011). IEEE Computer Society, Washington, DC (2011)
7. Weisstein, E.W.: Point-Plane Distance. From MathWorld, A Wolfram Web Resource
