Measurement of Dense Static Point Cloud and Online Behavior Recognition Using Horizontal LIDAR and Pan Rotation of Vertical LIDAR with Mirrors


Special Issue on SII 2012
SICE Journal of Control, Measurement, and System Integration, Vol. 7, No. 1, January 2014

Hiroshi NOGUCHI, Masato HANDA, Rui FUKUI, Masamichi SHIMOSAKA, Taketoshi MORI, Tomomasa SATO, and Hiromi SANADA

Abstract: This paper proposes a new device that captures a dense point cloud of the human contour using a horizontal LIDAR and a vertical LIDAR with mirrors mounted on a pan rotational mechanism. The paper also reports a new method to recognize simple behaviors such as standing and sitting online from only a single vertical scan. The combination of the rotation mechanism and the vertical LIDAR enlarges the measurement area to the whole room, while the mirrors and the pan rotation control increase the density of the point cloud. The horizontal LIDAR provides robust human tracking without missing the target. Experiments on the performance of the device and the method demonstrated that they are feasible for monitoring human behavior in living environments. The authors also demonstrated online recognition of simple human behaviors such as standing and sitting from a single vertical scan using a pattern recognition technique.

Key Words: behavior recognition, point cloud, rotational LIDAR, LIDAR with mirrors, human tracking.

1. Introduction

Monitoring systems for elderly people are becoming important as the elderly population grows. Typical monitoring systems employ simple sensors such as switch sensors and proximity sensors because these sensors are cheap and easy to introduce into room environments [1],[2]. These sensors can determine which room an inhabitant occupies (e.g., a lavatory or a bathroom) and can estimate simple, location-related behaviors such as sleeping and eating.
Low-level behaviors (e.g., walking and sitting) and detailed postures are also important for monitoring human behavior. Changes in the pattern of low-level behaviors reveal changes in life patterns more sensitively than location-related behaviors do. Measurement of detailed postures can reveal progressive weakening of the physical body (e.g., whether an occupant stands up with or without leaning on surrounding objects). Thus, new devices that can detect these behaviors and postures are required for monitoring elderly people. Camera systems are popular for capturing human postures. However, real room environments contain many obstacles, which cause occlusion for camera systems. In addition, lighting conditions often change in room environments, which is also hard for camera systems. Currently, special cameras that can measure distance data are also often utilized. A typical example is Kinect. This device captures both 2D images and depth images of objects. The depth data is robust to occlusion and illumination changes, which makes extraction of human contours easy. However, the fields of view (FOV) of these special cameras are limited to a small area. The small FOV causes the system to miss the target when the occupant moves outside the camera view. The measurement depth of such cameras is also short.

Department of Life Support Technology (Molten), Graduate School of Medicine, The University of Tokyo, Med-5th Bldg., Hongo, Bunkyo-ku, Tokyo, Japan
Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Eng-2nd Bldg., Hongo, Bunkyo-ku, Tokyo, Japan
Department of Gerontological Nursing/Wound Care Management, Graduate School of Medicine, The University of Tokyo, Med-5th Bldg., Hongo, Bunkyo-ku, Tokyo, Japan
E-mail: hnogu-tky@umin.ac.jp
(Received March 25, 2013)
(Revised August 20, 2013)
The short measurement depth and small FOV require a special mechanism or the deployment of multiple devices to capture human posture in the whole room. Special mechanisms such as 3D rotation of the camera, or distributed deployment, drastically increase the device price and the cost of introduction into the home environment. A 3D LIDAR such as the Velodyne HDL-3D can directly measure a point cloud over a large area, but the density of the point cloud is fixed and low. Thus, these specifications are not suitable for capturing detailed human posture. These traditional devices are insufficient for capturing human posture in the home environment. Another promising approach is the combination of a 2D LIDAR and a rotation mechanism. A 2D LIDAR is cheaper than a 3D LIDAR and can measure over a long distance, which solves the distance-related problems, while the rotation mechanism provides a large FOV. Therefore, a system containing a 2D LIDAR with a rotation mechanism can cover the whole room. Traditional systems rotate the LIDAR through 360 degrees at a uniform velocity. This 360-degree rotation takes a long time to capture the full 3D point cloud, and the long rotation time distorts the human body contour if the occupant moves during rotation. One idea to speed up the rotation is to adjust the rotation range to focus on the inhabitant. The limited rotation range reduces the rotation time, and it has another merit: it increases the point cloud density on the human contour. We mount the 2D LIDAR vertically on the pan rotational mechanism to shorten the rotation time and to increase the horizontal density of the point cloud. While the horizontal density of the point cloud is easily raised by rotation control, the LIDAR specification limits

the density on the LIDAR scan plane. We employ mirrors to overcome this problem, as in some previous works [3],[4]. The mirrors redirect the laser in otherwise useless scan ranges (e.g., the backward region). Reflection by a mirror can be regarded as the addition of another LIDAR, and these multiple virtual LIDARs enhance the density of the point cloud on a static human posture. When an occupant does not move, the device can capture a dense 3D point cloud of the static posture. In practice, the occupant moves around the room. This movement causes two problems: missing the target human and contour distortion. As for missing the target, a device with rotation control cannot track the occupant again once it loses the target. We therefore introduce another, horizontal 2D LIDAR into the device to track the human robustly. The horizontal contour captured by the horizontal LIDAR not only prevents target loss but also assists rotation control. As for contour distortion, since it is impossible to synchronize the rotation speed with the human movement speed, the point cloud does not fit the 3D human contour accurately while the human moves. Fortunately, behaviors during horizontal movement are limited to a few, such as walking. Thus, capturing an accurate point cloud is not required while the occupant moves. However, rough behavior recognition during movement is important for deciding the pan rotation control strategy. For example, if sitting behavior is detected, the device can focus on the areas of the arms and hands to detect detailed handling behaviors. We attempt to recognize these simple behaviors from only a single vertical scan using a pattern recognition method. The purpose of this research is the construction of a new device that captures a dense point cloud on the human body. The device consists of a horizontal LIDAR and a vertical LIDAR with mirrors on a pan rotation mechanism.
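The mirror-as-virtual-LIDAR idea can be checked with a little 2D geometry: reflecting the sensor origin across the mirror line yields the virtual LIDAR position, and the folded beam length (sensor to mirror to target) equals the straight-line distance from the virtual LIDAR to the target. A minimal sketch, with an arbitrary hypothetical mirror pose rather than the paper's actual geometry:

```python
import math

def reflect(p, a, d):
    """Reflect 2D point p across the line through a with unit direction d."""
    vx, vy = p[0] - a[0], p[1] - a[1]
    t = vx * d[0] + vy * d[1]              # projection onto the mirror line
    return (a[0] + 2 * t * d[0] - vx, a[1] + 2 * t * d[1] - vy)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def cross2(ux, uy, vx, vy):
    return ux * vy - uy * vx

lidar = (0.0, 0.0)                          # real LIDAR at the origin
a = (0.0, 0.1)                              # a point on the mirror line
d = (1 / math.sqrt(2), 1 / math.sqrt(2))    # mirror direction (45 degrees)

virtual = reflect(lidar, a, d)              # the "virtual LIDAR" position

# A target on the LIDAR's side of the mirror, seen via the reflection.
target = (0.8, 0.3)

# Mirror hit point: where the virtual->target line crosses the mirror line.
sx, sy = target[0] - virtual[0], target[1] - virtual[1]
s = -cross2(virtual[0] - a[0], virtual[1] - a[1], d[0], d[1]) \
    / cross2(sx, sy, d[0], d[1])
hit = (virtual[0] + s * sx, virtual[1] + s * sy)

folded = dist(lidar, hit) + dist(hit, target)   # beam path via the mirror
direct = dist(virtual, target)                  # straight path from virtual LIDAR
assert abs(folded - direct) < 1e-9
```

This is why reflected returns can simply be re-projected as if they came from an extra LIDAR at the mirrored pose, once that pose is calibrated.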
The authors' main contributions are 1) a mechanism that increases point cloud density using mirrors, 2) an algorithm for measuring the human 3D contour without missing the person, combining rotation control with human tracking by the horizontal LIDAR, and 3) a demonstration of online recognition of dynamic human behavior from single vertical scan data using a pattern recognition technique.

2. Design and Implementation of Device Hardware

2.1 Rotation Methods of LIDAR

Approaches to LIDAR with a rotation mechanism are classified into 3 types based on the LIDAR scan direction: horizontal, vertical, and slant (Fig. 1). A horizontal LIDAR is often utilized for recognition and localization of objects on a table; a typical example is the PR2-mounted LIDAR. The height of objects on a table is small, so the pan rotation angle needed with a horizontal LIDAR is smaller than the tilt rotation angle needed with a vertical LIDAR. This approach is advantageous for point cloud measurement on such objects. On the contrary, robots that navigate outdoors often employ a vertical LIDAR with a pan rotation mechanism for localization and construction of detailed 3D maps. A vertical LIDAR with pan rotation is robust to self-occlusion and suitable for constructing a panoramic 3D distance view. A slant LIDAR with pan rotation [5] is a special type of vertical LIDAR design. A vertical LIDAR captures useless points such as the ceiling. The slant LIDAR, however, narrows the scan area and decreases the vertical gaps of the 3D points, although a system using a slant LIDAR needs extra calculation of point locations. The relationship between the tilt angle and the point cloud density has already been analyzed [6]. As a special case, a device containing a LIDAR with a complicated oscillation mechanism can capture a dense 3D point cloud quickly [7]. However, it is difficult for such a device to control the scanning speed and range to focus on the human.

Fig. 1 Methods for LIDAR rotation.
Fig. 2 Improvement of vertical point cloud density with mirrors.
This mechanism is therefore outside our selection of device mechanisms for human contour capture. We adopt the vertical LIDAR with pan rotation. The human body contour is regarded as a tall cylinder, as in Fig. 1. The vertical LIDAR needs the smallest rotation range to capture a point cloud of the whole body. This approach provides another merit: the 3D positions of the points are easily calculated, and indexing the points is easy, which enables fast normal vector calculation and fast search for neighboring points. We align the center of the vertical LIDAR with the rotational axis of the pan mechanism in order to make point cloud calculation easy and fast.

2.2 Mirror Design

The vertical distance between neighboring points depends on the angular resolution of the vertical LIDAR. The resolution is fixed and limited by the LIDAR specification, and improving it is fundamentally difficult. On the other hand, LIDARs can scan a large range (e.g., 180 or 270 degrees), and such a large scan range is unnecessary for capturing the human body contour. Thus, effective use of the unused scan range can compensate for the vertical density of the point cloud. We employ mirrors to change the laser direction in the unused scan range. The reflection geometry with the mirrors is illustrated in Fig. 2. The use of mirrors can be regarded as the introduction of virtual LIDARs, as in the figure. However, it is difficult to design and implement the reflected scan points accurately enough to cover the gaps between normal scan points. Thus, we consider only the use of mirrors at design time and calibrate the mirror positions after implementation. The mirror reflection of scan points also increases the horizontal density of the point cloud because the pan rotation is synchronous with the vertical scanning, as in Fig. 3. The difference in point clouds for various mirror tilt angles at the same mirror position is simulated in Fig. 4. The points in the same

distance from the LIDAR located at the center are sketched in the figure. This simple simulation shows that a large mirror angle increases the density of the point cloud. However, a large mirror angle is impractical because it requires either a long mirror or a short distance between the LIDAR and the mirrors to cover the reflected laser beams. A long mirror increases the weight of the device, which unbalances it and requires a large rotational mechanism. A short distance between the LIDAR and the mirrors causes occlusion of the scan laser by the LIDAR itself. These conditions limit the mirror angle design. Commercially available LIDARs that satisfy both long-distance measurement and small size are limited. We use the UTM-30LX (Hokuyo Automatic Co., Ltd.) for the vertical scan. Its scan angle resolution is 0.25 degrees. To capture detailed human posture, our system should place a scan point on a finger, whose diameter is about 10 mm. From the LIDAR angle resolution, about 2.3 m is the distance limit that satisfies this condition without mirrors. Roughly speaking, if the point cloud density doubles, this limit distance also doubles, and the doubled distance covers a typical room size. Therefore, we designed the mirror angles to obtain twice as many points as without mirrors. Another limitation is the height of the device on the rotational mechanism. We set the maximum height to 300 mm based on the rotational mechanism's performance specification and for safety. We also assume that the device is deployed at hip height on room objects such as tables or shelves, because ceiling attachment is difficult for safety reasons and low deployment causes occlusion of the scan by obstacles.

Fig. 3 Improvement of horizontal point cloud density with mirrors.
Fig. 4 Point density with various mirror angles.
Fig. 5 Mirror arrangement design for vertical contour scan.
Based on these conditions, the distance from the LIDAR to the mirrors and the length of the mirrors are decided as in Fig. 5.

2.3 Device Implementation

The implemented device and the specifications of the parts used are shown in Fig. 6. These parts are connected to a single PC. The total size of the device is H450 × W150 × D223 mm. The device is powered by AC 100 V. It is supposed to be deployed on objects (e.g., a table or shelf) in a room or fixed to a tripod at hip height. Accurate point cloud construction requires synchronization of the 3 devices using a special synchronization signal; algorithms for sensor synchronization are well known in the robotics domain [8]. We implemented the device software without this special synchronization mechanism, because synchronization using only PC software is sufficient for human posture detection, for the following reasons. Synchronization of the horizontal LIDAR is not important because its data are only used for human tracking and for deciding the rotation control. Synchronization between the pan rotational mechanism and the vertical LIDAR is important for capturing an accurate point cloud; however, only the accuracy of the relative positions within the point cloud is required to capture human posture, and the relative distance between points is stable within a single pan rotation if the rotation speed is uniform. Therefore, PC-based synchronization suffices for capturing a point cloud on the human body. Generally, the combined use of multiple LIDARs requires calibration between them. In our case, we ignore this problem for two reasons. First, even if the LIDAR positions were calibrated, calculating the vertical scan points from the calibration parameters would incur a large computation cost, which would prevent online measurement. Second, the horizontal LIDAR is only used for tracking the human and deciding the control angle of the rotational mechanism.

Fig. 6 Implemented device and specification.
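Because the vertical LIDAR's center sits on the pan axis, turning a range sample into a 3D point needs only two rotations and no lever-arm offset. A minimal sketch; the angle conventions and the mounting-height parameter are our assumptions, not the paper's exact frames:

```python
import math

def scan_to_xyz(r, phi, theta, h=1.0):
    """Map one vertical-scan sample to a 3D point.
    r: measured range (m); phi: beam elevation within the vertical scan
    plane (0 = horizontal); theta: current pan angle; h: sensor height (m)."""
    reach = r * math.cos(phi)        # horizontal reach inside the scan plane
    x = reach * math.cos(theta)      # pan rotation spreads the plane in x-y
    y = reach * math.sin(theta)
    z = h + r * math.sin(phi)        # elevation above the floor
    return x, y, z

# A horizontal beam at pan angle 0 lands straight ahead at sensor height.
print(scan_to_xyz(2.0, 0.0, 0.0))    # -> (2.0, 0.0, 1.0)
```

Because theta comes from the pan stage and phi from the scan index, points fall on a natural (pan step, scan step) grid, which is what makes the neighbor search and normal computation mentioned above fast.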
The horizontal LIDAR does not affect the accuracy of the point cloud, because the point cloud is captured using only the vertical LIDAR. Thus, we used the designed positional relation between the centers of the horizontal and vertical LIDARs for scan point calculation.

3. Software Implementation of Device

3.1 Calibration of Mirror-Reflected Points

The points reflected by the mirrors are regarded as points from virtual LIDARs. The relative position (T_x, T_y) and the relative rotation (R_θ) of each virtual LIDAR need to be estimated to calculate accurate 3D positions of the reflected points. ICP is a popular approach to calculating the relative position and rotation between two LIDARs. However, ICP is weak against outlier

points and cannot be utilized when the number of common points is small. We therefore developed a new line-based calibration method for the LIDARs. First, a large plane is placed in front of the device several times for calibration. Each LIDAR i detects the plane as a line with three parameters (a(i), b(i), c(i)), as in Eq. (1). We employed RANSAC, a kind of robust estimation, for this line detection, with an iteration count of 600.

a(i) x + b(i) y + c(i) = 0.  (1)

The distance error between LIDAR 1 and LIDAR 2 at the j-th plane is defined as Eq. (2), and the rotation error between the normal vectors of the estimated lines as Eq. (3), where φ is the angle of the normal vector. The relative position and rotation of the LIDAR are calculated by minimizing the sum of the distance errors ε_h and the rotation errors ε_θ using the least squares method. The relationship between the parameters is illustrated in Fig. 7.

ε_hj = (a(1)_j T_x + b(1)_j T_y + c(1)_j) / √(a(1)_j² + b(1)_j²) − c(2)_j / √(a(2)_j² + b(2)_j²)  (2)

ε_θj = φ(1)_j + R_θ − φ(2)_j  (3)

ε_h = Σ_j ε_hj²  (4)

ε_θ = Σ_j ε_θj².  (5)

Fig. 7 Calibration of virtual LIDAR.

3.2 Control Method of Rotational Mechanism

Human tracking using horizontal LIDAR

We utilize a particle filter to track the human using the horizontal LIDAR. The particle filter is a kind of Bayesian filter, often utilized for mobile robot localization and human tracking with LIDARs because its model design is flexible enough to deal with LIDAR scan data. The device captures background scan data in advance. During tracking, the device detects the human contour by subtracting the background points, then applies the filter to the extracted human contour points. The device allocates one tracker to each person. A tracker starts when a human contour is detected in the given area.

Fig. 8 Model design of particle filter.
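The contour extraction that feeds the tracker can be sketched as a per-beam range comparison against the stored background scan. The 0.1 m margin and the function names here are illustrative assumptions, not values from the paper:

```python
import math

def foreground_points(ranges, background, angles, margin=0.1):
    """Keep beams that return significantly shorter than the background:
    something (e.g., a person) now stands between the sensor and the wall.
    Returns the (x, y) hit points of the foreground beams."""
    pts = []
    for r, b, a in zip(ranges, background, angles):
        if b - r > margin:
            pts.append((r * math.cos(a), r * math.sin(a)))
    return pts

# Background wall at 3.0 m on every beam; a person intercepts two beams at 1.2 m.
bg = [3.0] * 5
cur = [3.0, 1.2, 1.2, 3.0, 3.0]
ang = [math.radians(-10 + 5 * i) for i in range(5)]
print(len(foreground_points(cur, bg, ang)))   # -> 2
```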
A tracker is removed when it has missed the human contour for several frames. In the filter, the state X_t is the 2D position. We construct a transition model assuming uniform linear movement of the indoor human. The model contains a particle spreading mechanism to adapt to sudden movements of the inhabitant. Equation (6) represents the transition model, illustrated in Fig. 8-A). Δt is the scan time of the horizontal LIDAR, v is the mean human movement velocity, θ is the mean direction of human movement, and ω is a variable related to particle spreading. v and θ are calculated as in Eq. (7). The mean of the values in the position history {X_{t−T}, …, X_t} is used as the value at time t to smooth it; the time duration T was set to 1 s empirically. ω is calculated from Eq. (8) and changes with the human movement speed. In other words, if the occupant moves fast, the particles spread widely (large ω) to avoid losing them during evaluation; if the inhabitant does not move, the particles move only a short distance (small ω) to improve the position estimation accuracy. We set α = 100 empirically and used a uniform random number r ∈ [0, 1].

X_t = ( X(x)_{t−1} + Δt v_{t−1} cos θ_{t−1} + ω_x ,  X(y)_{t−1} + Δt v_{t−1} sin θ_{t−1} + ω_y )  (6)

v_t = ‖X_t − X_{t−1}‖ / Δt,   θ_t = tan⁻¹( (X(y)_t − X(y)_{t−1}) / (X(x)_t − X(x)_{t−1}) )  (7)

ω = (Δt v_{t−1} + α) r.  (8)

The horizontal LIDAR observes the human hip contour because of the device height. We design the observation model by regarding the human contour as a circle. The human contour is fundamentally an ellipse; however, the contour captured by a single LIDAR is incomplete, and fitting incomplete contour points to an ellipse is difficult. The ellipse equation requires 5 parameters while the circle equation needs only 3. Generally speaking, more parameters require more data samples; in other words, a few samples lead to unstable estimation. Circle fitting is therefore more stable than ellipse fitting.
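A per-particle sketch of the transition model (Eqs. (6)-(8)) and the circle-based weight (Eqs. (9)-(10)): units are millimeters, and using a signed uniform sample per axis for the spread term is our assumption about how ω is applied to each coordinate.

```python
import math
import random

ALPHA = 100.0     # spread constant (paper value, mm)
LAMBDA = 50.0     # circle-fitting variance parameter (paper value)
R_HIP = 160.0     # hip circle radius (paper value, mm)

def transition(x, y, v, theta, dt):
    """Eq. (6): uniform linear motion plus a speed-dependent spread (Eq. (8))."""
    w = dt * v + ALPHA
    return (x + dt * v * math.cos(theta) + w * random.uniform(-1.0, 1.0),
            y + dt * v * math.sin(theta) + w * random.uniform(-1.0, 1.0))

def weight(px, py, contour):
    """Eqs. (9)-(10): high weight when contour points lie near a circle of
    radius R_HIP centered on the particle. The zero-weight penalty for
    particles between the sensor and the contour is omitted in this sketch."""
    E = sum(abs(math.hypot(cx - px, cy - py) - R_HIP) for cx, cy in contour)
    return math.exp(-E / LAMBDA ** 2)

# A particle at the true hip center scores the maximum weight.
on_circle = [(160.0, 0.0), (0.0, 160.0), (-160.0, 0.0)]
print(weight(0.0, 0.0, on_circle))   # -> 1.0
```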
An ellipse model or a more complex shape model from LIDAR scan data can be used for human direction estimation. In our case, direction estimation is unnecessary because the 3D contour from the vertical LIDAR gives more detailed information than the horizontal contour; thus, the circle model is sufficient. The weight of a particle, π, is calculated from the observation model (Eqs. (9) and (10)). λ relates to the variance of the circle fitting and depends on many conditions such as body shape and clothes; we set λ = 50 empirically. d_i is the distance between the position represented by a particle and a single scan point p_i, and R is the radius of the human hip, defined as R = 160 mm. The equations mean that a particle's weight becomes large when the distances between the particle position and the points are similar to R. Distorted contours cause position estimation errors because the particle weights become unstable. In particular, outlier points near the device shift the position estimate. To prevent this problem, we apply a penalty (the particle weight becomes zero) to particles positioned between the LIDAR and the human contour, as in Fig. 8-B).

π = exp(−E/λ²)  (9)

E = Σ_i |d_i − R|.  (10)

Control of pan rotation mechanism

The pan rotational mechanism is controlled based on the estimated human location. The simplest approach is to control the pan rotation to minimize the difference between the current pan angle and the direction to the estimated human position; however, this cannot capture the whole 3D contour of the human body. The control approach should satisfy both robust tracking and whole-body contour capture, i.e., a combination of servo control toward the human and swing control according to the body width. The control depends on the human direction, distance, and velocity, and optimizing it is difficult. Thus, we focus only on the distance between the device and the human, because the swing width mainly depends on this distance. The swing angle is calculated as θ_d = tan⁻¹(1/(2d)) from the measured distance d (m) and the knowledge that the body width with outstretched arms is about 1 m. In other words, the swing range is [θ_h − θ_d/2, θ_h + θ_d/2], where θ_h is the absolute horizontal angle from the device to the human. The pan rotation mechanism is controlled toward this swing angle range as a target. The control algorithm is shown in Algorithm 1; the pan velocities were defined empirically.

Algorithm 1 Control Algorithm for Pan Rotation
1: while Human Tracking do
2:   Calculate swing range θ_d from the estimated human position
3:   if pan rotation angle is within the swing range then
4:     Select whichever of θ_h + θ_d/2 and θ_h − θ_d/2 differs more from the current pan angle as the target angle
5:     Control at low speed (15°/s) toward the target angle
6:   else
7:     Select whichever of θ_h + θ_d/2 and θ_h − θ_d/2 differs less from the current pan angle as the target angle
8:     Control at high speed (60°/s) toward the target angle
9:   end if
10: end while

Fig. 9 Flow of behavior recognition steps.
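One control tick of Algorithm 1 can be sketched as follows. Angles are in radians; the degree-per-second speeds are the paper's, everything else is an illustrative skeleton:

```python
import math

LOW_SPEED = math.radians(15.0)    # inside the swing range (paper: 15 deg/s)
HIGH_SPEED = math.radians(60.0)   # returning to the range (paper: 60 deg/s)

def control_step(pan, theta_h, d):
    """Pick the next target edge of the swing range and a pan speed.
    pan: current pan angle; theta_h: bearing to the human; d: distance (m)."""
    theta_d = math.atan(1.0 / (2.0 * d))      # swing width from ~1 m body span
    lo, hi = theta_h - theta_d / 2.0, theta_h + theta_d / 2.0
    if lo <= pan <= hi:
        # Inside the range: sweep toward the farther edge, slowly.
        target = lo if abs(pan - lo) > abs(pan - hi) else hi
        return target, LOW_SPEED
    # Outside the range: rush back to the nearer edge.
    target = lo if abs(pan - lo) < abs(pan - hi) else hi
    return target, HIGH_SPEED

# Pointing straight at a person 1 m away: keep sweeping at low speed.
t, s = control_step(0.0, 0.0, 1.0)
print(s == LOW_SPEED)    # -> True
```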
3.3 Behavior Recognition from Single Vertical Scan Data

We developed a method to recognize simple behaviors as in Fig. 9. The method extracts the human contour from a single vertical scan, constructs feature vectors from the contour, and recognizes the behavior as a multi-class pattern recognition problem on the extracted feature vectors. Many pattern recognition techniques exist; in this paper, our goal is not to develop a new technique but to demonstrate that the system can recognize dynamic behavior from only a single vertical scan line. Therefore, we selected popular techniques for recognizing human behavior from silhouette images: shape context [9] for feature extraction and SVM [10] for classification. The device scans the whole room in advance for background contour measurement. The captured points are projected into a 3D grid map whose cell size is 10 mm. To extract foreground scan points, the captured scan points are projected into the 3D grid map; if the occupied cell is equivalent to the projected cell, the scan point is regarded as a background point. The remaining foreground points are regarded as human contour points. Since this approach sometimes includes noise points, the method eliminates any foreground scan point located more than 300 mm from the other foreground scan points. For feature calculation, shape context, a descriptor of the geometric features of a contour, is utilized. The contour data undergo a log-polar transformation around a fixed center point, the transformed points are voted into bins, and a histogram is generated from the votes. Since the 3D scan points are sparsely distributed in 3D space, we convert the scan points before calculating the shape context. The method projects the 3D scan points onto a 2D plane (the left part of Fig.
10) and shifts the projected points horizontally to the center to remove the effect of the contour size depending on the distance from the device (the middle of Fig. 10). For the parameters, five concentric circles are defined at 0.2 m intervals, and each circle is divided into 12 sectors of 30 degrees (the right part of Fig. 10). The total feature dimension is 60. SVM, known for its high generality in classification, is used for behavior classification; for implementation, libsvm 2 with the sigmoid kernel is used. Since the number of scan points is small and the calculation costs of shape context and SVM are also small, our method can recognize human behaviors online. How to collect the training data set is an important point for applications using pattern recognition techniques. Although annotating behavior labels after natural behavior collection is the best solution, this task is troublesome and time-consuming. To avoid it, scan data of static postures are used as training data, because a good feature extractor (in this case, shape context) will cover the contour differences between the dynamic and static cases. An example of how to obtain training data is shown in Section 4.3.

Fig. 10 Calculation flow from 3D scan points to features.

4. Experiment for Device Performance Evaluation

4.1 Performance Experiment for Point Cloud Measurement

Experiment for accuracy of distance measurement

We conducted an experiment to measure the distance measurement accuracy of our device. The point cloud of a white wall 1.5 m in front of the device was measured as in Fig. 11. In the figure, the green points are reflected points from the upper mirror and the blue points are reflected points from the lower mirror. The average of the distances from the device to the wall points

2 cjlin/libsvm/

including the reflection points was 1.507 m, which means the average error was 7 mm. The fraction of points whose distance error was within 20 mm was 87% of all points, and the maximum error was within 40 mm. These results demonstrate that our device can measure accurate point clouds, and that both our calibration method for the virtual LIDARs and the synchronization of the vertical scan with the pan rotation work well.

Static posture measurement

Static postures were measured using our device. The pan rotation speed was slow, 15 deg/s, to capture the detailed contour of the human body; at this pan speed the horizontal and vertical distances between neighboring points are equivalent. The height of the device was 1 m. The background points captured in advance were subtracted. Figure 12 shows point clouds at the moment the arms moved upward, together with the numbers of points. The number of points including reflected points was 1.81 times larger than without mirrors. The enlarged view shows that the captured points are arranged at equal intervals vertically and horizontally. Point clouds for other postures are shown in Fig. 13. The device captured accurate contours of the arms and legs even though their radii are small.

Fig. 11 Experiment condition and result of distance measurement accuracy.
Fig. 12 Comparison of point clouds with/without mirrors.
Fig. 13 Point clouds with various postures.

4.2 Experiment about Human Tracking and Pan Control

We evaluated the human tracking performance using the horizontal LIDAR. A subject walked along a circle in the experiment. A motion capture system (OptiTrack) measured the position of the subject's head as reference data. The relationship between the pan rotation angle and the angle from the device to the estimated human position was also measured to confirm the performance of the pan rotation control.
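The tracking accuracy reported below is a mean position error against the reference track. With time-aligned trajectories it reduces to the following simplified sketch (names are ours):

```python
import math

def mean_position_error(estimated, reference):
    """Mean Euclidean distance between time-aligned 2D trajectories (mm)."""
    errs = [math.hypot(ex - rx, ey - ry)
            for (ex, ey), (rx, ry) in zip(estimated, reference)]
    return sum(errs) / len(errs)

est = [(0.0, 0.0), (100.0, 0.0)]
ref = [(0.0, 30.0), (100.0, 0.0)]
print(mean_position_error(est, ref))   # -> 15.0
```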
In addition, the number of points on the body captured by the vertical LIDAR was counted to demonstrate that the human contour was captured without being missed. The points on the body were also calculated by subtracting the background scan points measured in advance. The human tracking result is shown in Fig. 14.

Fig. 14 Comparison of tracking position with motion capture.

Part of the reference trajectory was not measured because it was outside the motion capture measurement area. The graph illustrates that our device tracked the human position accurately; the average error was 272 mm. The graph also indicates that the position error tended to become large when the human was located near the device. There, the device captured points of the human's side contour, which is smaller than the frontal contour, and our observation model forces a circle fit onto this small contour, which can lead to a wrong position estimate. Fortunately, the error tended to be large in the depth direction from the device, and this depth-direction error can be covered by the swing control of the pan rotation. The result for the estimated angle and the pan rotation angle is shown in Fig. 15, together with the number of points captured as the human contour by the vertical LIDAR. The graph demonstrates that the pan rotation mechanism swung around the center of the estimated human angle. The control algorithm is simple but achieved both robust tracking and capture of the whole contour. The number of points also changed with the pan rotation swing because the vertical scan line passed over different body parts during the swing motion. A small number of points means that the vertical LIDAR sometimes captured only the edge of the body; the fixed and large swing angle caused this problem. The improvement of swing angle control

based on the captured horizontal contour and direction estimation may solve this problem.

Fig. 15 Comparison between device angle control and estimated angle.

Table 1 Recognition result for static posture recognition.

                   precision     recall        F-value
Standing           0.82 (0.81)   0.95 (0.94)   0.88 (0.87)
Sitting on chair   0.84 (0.82)   0.95 (0.93)   0.88 (0.87)
Sitting on floor   0.81 (0.80)   0.97 (0.97)   0.88 (0.88)
Reaching           0.82 (0.81)   0.95 (0.93)   0.88 (0.88)
Squatting          0.82 (0.80)   0.96 (0.96)   0.88 (0.87)
mean               0.82 (0.81)   0.95 (0.94)   0.88 (0.87)

4.3 Behavior Recognition without Movement

To simplify the recognition problem, we first evaluated the recognition of static postures. A subject was asked to perform 5 postures: standing, sitting on a chair, sitting on a floor, reaching, and squatting. The device, at a height of 1.1 m, was located 1.5 m in front of the subject. The subject changed direction in 45-degree steps, so 8 directions in total were captured by the device. During each capture, the device swung at 30 degrees/s. After scan data collection, scan lines not containing the human contour were eliminated by hand. Finally, 300 scan lines per posture were used for the experiment. For evaluation, we calculated precision, recall, and F-value (the harmonic mean of precision and recall) with ten-fold cross validation. The result is shown in Table 1; the parenthesized numbers are the values calculated from the points including the reflected points. The result shows that the method recognized the behaviors accurately. Since the contours of the five behaviors differ drastically from each other, the method achieved highly accurate performance from only single vertical scan data. This means that our method is feasible for recognizing simple behaviors. The performance with the reflection points was similar to the performance without the reflection points.

Fig. 16 Condition and protocol for dynamic human movement experiment.
This means that the number of scan points without reflection was large enough to express these five behaviors. The reflected points and other 3D features may help to recognize more detailed behaviors, because the reflected points contain horizontal information.

4.4 Point Cloud Measurement and Behavior Recognition under Dynamic Human Movement

Finally, we evaluated our device to confirm its feasibility under dynamic human movement. The size of the experiment room was 4 m × 5 m. The device was deployed at the edge of the room at 1.1 m height. In the experiment, the subject entered the room, sat on the chair, reached for the object, squatted and sat on the floor while moving (Fig. 16). The point cloud was captured in a single swing of the pan rotation mechanism. The device tracked the subject robustly during the experiment. Although behaviors other than standing distorted the horizontal contour, the subject was never lost. The method recognized the behavior at each vertical scan in parallel with capturing the point cloud. The training data for behavior recognition was the same as in the experiment of Sec. 4.3. The captured human contour and estimated behaviors are shown in Fig. 17.

Fig. 17 Typical scenes in dynamic human movement experiment.

The device captured the point cloud even though the human contour was distorted, and the behaviors were accurately recognized under dynamic movement, as shown in the figure. In the experiment, walking was recognized as standing, because walking was excluded from the training behaviors and contains both a standing posture and horizontal movement. Although the training data was captured at static postures, the device detected the postures. This is because the training data contained data from several directions, and both the feature and the classifier generalized well for simple posture recognition. The experiment showed that our device is feasible for monitoring human behavior in indoor environments.

5. Conclusion

The authors constructed a new device to capture point clouds for detailed human posture measurement. The device consists of a horizontal LIDAR and a vertical LIDAR mounted on a pan rotational mechanism with mirrors. The device can capture an accurate point cloud of the human body over a large area, and a dense point cloud through human-focused control of the pan rotation and the mirrors. We also recognized simple human behaviors such as standing and sitting online from a single vertical scan using a pattern recognition technique. The experiments demonstrated that the proposed device and method are feasible for capturing dense point clouds and for estimating human behavior in room environments. In future work, the authors will develop a detailed posture estimation method based on both the simple posture recognition results and the captured dense 3D point clouds.

References

[1] S.S. Intille, K. Larson, E.M. Tapia, J.S. Beaudin, P. Kaushik, J. Nawyn, and R. Rockinson: Using a live-in laboratory for ubiquitous computing research, Proc. Pervasive 2006, LNCS 3968.
[2] T. Mori, T. Ishino, H. Noguchi, M. Shimosaka, and T. Sato: Anomaly detection and life pattern estimation for the elderly based on categorization of accumulated data, Proc. International Symposium on Computational Models for Life Sciences (CMLS-11).
[3] D. Brscic and H. Hashimoto: Extension of laser range finder functionality using mirrors, Proc. JSME Conference on Robotics and Mechatronics (ROBOMEC), pp. 2P1-G16(1)–2P1-G16(3).
[4] Y. Nohara, T. Hasegawa, and K. Murakami: Floor sensing system using laser range finder and mirror for localizing daily life commodities, Proc. IEEE/RSJ Int. Conf.
on Intelligent Robots and Systems (IROS).
[5] K. Ohno, T. Kawahara, and S. Tadokoro: Development of 3D laser scanner for measuring uniform and dense 3D shapes of static objects in dynamic environment, Proc. IEEE International Conference on Robotics and Biomimetics.
[6] A. Desai and D. Huber: Objective evaluation of scanning ladar configurations for mobile robots, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems.
[7] M. Matsumoto and S. Yuta: 3D laser range sensor module with roundly swinging mechanism for fast and wide view range image, Proc. IEEE Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI).
[8] E. Olson: A passive solution to the sensor synchronization problem, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems.
[9] S. Belongie, J. Malik, and J. Puzicha: Shape matching and object recognition using shape contexts, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 4.
[10] C. Cortes and V. Vapnik: Support-vector networks, Machine Learning, Vol. 20, No. 3.

Hiroshi NOGUCHI
He received the Ph.D. degree in Information Science and Technology from The University of Tokyo. He was a Research Associate at the Graduate School of Information Science and Technology of The University of Tokyo and, beginning in 2006, a researcher of CREST, Japan Science and Technology Agency. He is now a Project Assistant Professor of the Department of Life Support Technology (Molten). His research interests include sensor networks, robot middleware, smart homes, software engineering, pervasive computing and nursing engineering. He is a member of the Robotics Society of Japan, the Japan Society of Mechanical Engineers (JSME) and IEEE.

Masato HANDA
He received the B.S. degree from Shibaura Institute of Technology and the M.S. degree from The University of Tokyo. His research interests include reinforcement learning and human behavior monitoring systems.
Rui FUKUI
He received the B.S. and M.S. degrees from The University of Tokyo in 2002 and 2004, respectively. From April 2004 to March 2006, he was an engineer in the Special Vehicle Designing Section, Mitsubishi Heavy Industries, Ltd. From April 2008 to March 2009, he was a Research Fellow (DC2) of the Japan Society for the Promotion of Science (JSPS). In 2009, he received the Ph.D. (Information Science and Technology) from The University of Tokyo. From April 2009 to March 2011, he was a Project Assistant Professor of the Department of Mechano-Informatics, The University of Tokyo. Since April 2011, he has been an Assistant Professor of the department. He has been working on human symbiotic robot systems, human monitoring sensors and mechanism design. He is a member of the Japan Society of Mechanical Engineers (JSME), the IEEE Robotics and Automation Society, the Japan Society for Technology of Plasticity and the Robotics Society of Japan.

Masamichi SHIMOSAKA
He received the Ph.D. in Information Engineering from The University of Tokyo. He was a Research Associate of The University of Tokyo and, from 2007, an Assistant Professor at the university. Now, he is an Assistant Professor (full-time lecturer) at the university. His research interests include action recognition, behavior monitoring, motion tracking, human modeling, intelligent data analysis, computer vision, machine learning, and data mining. He is an active member of the Robotics Society of Japan, the Japanese Society of Artificial Intelligence, the Institute of Electronics, Information and Communication Engineers, and IEEE.

Taketoshi MORI (Member)
He received the Ph.D. in Information Engineering from The University of Tokyo. From 1995 to 1998, he was a Research Associate at the Research Center for Advanced Science and Technology of The University of Tokyo. From 1998, he was an Assistant Professor at the university. Now, he is an Associate Professor at the university.
His research interests include pervasive sensing and computing, recognition, robot vision, and image processing including hardware design. He is an active member of the Robotics Society of Japan, the IEEE Robotics and Automation Society, the IEEE Computer Society, the Japan Society of Mechanical Engineers, the Institute of Electronics, Information and Communication Engineers, and the Japanese Society for Artificial Intelligence.

Tomomasa SATO
He received the B.S., M.S. and Ph.D. degrees in mechanical engineering from The University of Tokyo, Japan, in 1971, 1973 and 1976, respectively. From 1976, he was with the Electrotechnical Laboratory (ETL) of the Ministry of Industrial Science and Technology. In 1991, he moved to the Research Center for Advanced Science and Technology (RCAST) of The University of Tokyo. Since 1998, he has been a Professor of the Department of Mechano-Informatics of the university. His current research interests include intelligent machines, human symbiosis robots and environmental type robot systems. He is an active member of the IEEE Robotics and Automation Society, the Japan Society of Mechanical Engineers, the Japan Society for Artificial Intelligence, and the Robotics Society of Japan.

Hiromi SANADA
She received the Ph.D. in Medicine from Kanazawa University. She was a Professor at Kanazawa University from 1998, and is now a Professor at The University of Tokyo. She is a nurse, a certificated WOC nurse and an E.T. nurse. Her research interests include wound care management and gerontological nursing. She is the president of the Japanese Society of Wound, Ostomy, and Continence Management and the Japan Society of Pressure Ulcer.


More information

Three-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera

Three-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera Three-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera Kazuki Sakamoto, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, and Hajime Asama Abstract

More information

Absolute Scale Structure from Motion Using a Refractive Plate

Absolute Scale Structure from Motion Using a Refractive Plate Absolute Scale Structure from Motion Using a Refractive Plate Akira Shibata, Hiromitsu Fujii, Atsushi Yamashita and Hajime Asama Abstract Three-dimensional (3D) measurement methods are becoming more and

More information

Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds

Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds 1 Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds Takeru Niwa 1 and Hiroshi Masuda 2 1 The University of Electro-Communications, takeru.niwa@uec.ac.jp 2 The University

More information

A Robot Recognizing Everyday Objects

A Robot Recognizing Everyday Objects A Robot Recognizing Everyday Objects -- Towards Robot as Autonomous Knowledge Media -- Hideaki Takeda Atsushi Ueno Motoki Saji, Tsuyoshi Nakano Kei Miyamato The National Institute of Informatics Nara Institute

More information

ENY-C2005 Geoinformation in Environmental Modeling Lecture 4b: Laser scanning

ENY-C2005 Geoinformation in Environmental Modeling Lecture 4b: Laser scanning 1 ENY-C2005 Geoinformation in Environmental Modeling Lecture 4b: Laser scanning Petri Rönnholm Aalto University 2 Learning objectives To recognize applications of laser scanning To understand principles

More information

Multi-view Surface Inspection Using a Rotating Table

Multi-view Surface Inspection Using a Rotating Table https://doi.org/10.2352/issn.2470-1173.2018.09.iriacv-278 2018, Society for Imaging Science and Technology Multi-view Surface Inspection Using a Rotating Table Tomoya Kaichi, Shohei Mori, Hideo Saito,

More information

Expanding gait identification methods from straight to curved trajectories

Expanding gait identification methods from straight to curved trajectories Expanding gait identification methods from straight to curved trajectories Yumi Iwashita, Ryo Kurazume Kyushu University 744 Motooka Nishi-ku Fukuoka, Japan yumi@ieee.org Abstract Conventional methods

More information

Construction and Calibration of a Low-Cost 3D Laser Scanner with 360º Field of View for Mobile Robots

Construction and Calibration of a Low-Cost 3D Laser Scanner with 360º Field of View for Mobile Robots Construction and Calibration of a Low-Cost 3D Laser Scanner with 360º Field of View for Mobile Robots Jorge L. Martínez, Jesús Morales, Antonio, J. Reina, Anthony Mandow, Alejandro Pequeño-Boter*, and

More information

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of

More information

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Tomokazu Sato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute

More information

Robot Localization based on Geo-referenced Images and G raphic Methods

Robot Localization based on Geo-referenced Images and G raphic Methods Robot Localization based on Geo-referenced Images and G raphic Methods Sid Ahmed Berrabah Mechanical Department, Royal Military School, Belgium, sidahmed.berrabah@rma.ac.be Janusz Bedkowski, Łukasz Lubasiński,

More information

Human trajectory tracking using a single omnidirectional camera

Human trajectory tracking using a single omnidirectional camera Human trajectory tracking using a single omnidirectional camera Atsushi Kawasaki, Dao Huu Hung and Hideo Saito Graduate School of Science and Technology Keio University 3-14-1, Hiyoshi, Kohoku-Ku, Yokohama,

More information

Production of Video Images by Computer Controlled Cameras and Its Application to TV Conference System

Production of Video Images by Computer Controlled Cameras and Its Application to TV Conference System Proc. of IEEE Conference on Computer Vision and Pattern Recognition, vol.2, II-131 II-137, Dec. 2001. Production of Video Images by Computer Controlled Cameras and Its Application to TV Conference System

More information

Shape Modeling of A String And Recognition Using Distance Sensor

Shape Modeling of A String And Recognition Using Distance Sensor Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication Kobe, Japan, Aug 31 - Sept 4, 2015 Shape Modeling of A String And Recognition Using Distance Sensor Keisuke

More information

Scan-point Planning and 3-D Map Building for a 3-D Laser Range Scanner in an Outdoor Environment

Scan-point Planning and 3-D Map Building for a 3-D Laser Range Scanner in an Outdoor Environment Scan-point Planning and 3-D Map Building for a 3-D Laser Range Scanner in an Outdoor Environment Keiji NAGATANI 1, Takayuki Matsuzawa 1, and Kazuya Yoshida 1 Tohoku University Summary. During search missions

More information

Fisheye Camera s Intrinsic Parameter Estimation Using Trajectories of Feature Points Obtained from Camera Rotation

Fisheye Camera s Intrinsic Parameter Estimation Using Trajectories of Feature Points Obtained from Camera Rotation Fisheye Camera s Intrinsic Parameter Estimation Using Trajectories of Feature Points Obtained from Camera Rotation Akihiko Hishigi, Yuki Tanaka, Gakuto Masuyama, and Kazunori Umeda Abstract This paper

More information

Robot localization method based on visual features and their geometric relationship

Robot localization method based on visual features and their geometric relationship , pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department

More information

Gauss-Sigmoid Neural Network

Gauss-Sigmoid Neural Network Gauss-Sigmoid Neural Network Katsunari SHIBATA and Koji ITO Tokyo Institute of Technology, Yokohama, JAPAN shibata@ito.dis.titech.ac.jp Abstract- Recently RBF(Radial Basis Function)-based networks have

More information

A Method of Annotation Extraction from Paper Documents Using Alignment Based on Local Arrangements of Feature Points

A Method of Annotation Extraction from Paper Documents Using Alignment Based on Local Arrangements of Feature Points A Method of Annotation Extraction from Paper Documents Using Alignment Based on Local Arrangements of Feature Points Tomohiro Nakai, Koichi Kise, Masakazu Iwamura Graduate School of Engineering, Osaka

More information

DESIGN AND IMPLEMENTATION OF VISUAL FEEDBACK FOR AN ACTIVE TRACKING

DESIGN AND IMPLEMENTATION OF VISUAL FEEDBACK FOR AN ACTIVE TRACKING DESIGN AND IMPLEMENTATION OF VISUAL FEEDBACK FOR AN ACTIVE TRACKING Tomasz Żabiński, Tomasz Grygiel, Bogdan Kwolek Rzeszów University of Technology, W. Pola 2, 35-959 Rzeszów, Poland tomz, bkwolek@prz-rzeszow.pl

More information

A High Speed Face Measurement System

A High Speed Face Measurement System A High Speed Face Measurement System Kazuhide HASEGAWA, Kazuyuki HATTORI and Yukio SATO Department of Electrical and Computer Engineering, Nagoya Institute of Technology Gokiso, Showa, Nagoya, Japan, 466-8555

More information

REMARKS ON MARKERLESS HUMAN MOTION CAPTURE USING MULTIPLE IMAGES OF 3D ARTICULATED HUMAN CG MODEL

REMARKS ON MARKERLESS HUMAN MOTION CAPTURE USING MULTIPLE IMAGES OF 3D ARTICULATED HUMAN CG MODEL 18th European Signal Processing Conference (EUSIPCO-2010) Aalborg, Denmar, August 23-27, 2010 REMARKS ON MARKERLESS HUMAN MOTION CAPTURE USING MULTIPLE IMAGES OF 3D ARTICULATED HUMAN CG MODEL Kazuhio Taahashi

More information

3D Tracking Using Two High-Speed Vision Systems

3D Tracking Using Two High-Speed Vision Systems 3D Tracking Using Two High-Speed Vision Systems Yoshihiro NAKABO 1, Idaku ISHII 2, Masatoshi ISHIKAWA 3 1 University of Tokyo, Tokyo, Japan, nakabo@k2.t.u-tokyo.ac.jp 2 Tokyo University of Agriculture

More information

Visual Attention Control by Sensor Space Segmentation for a Small Quadruped Robot based on Information Criterion

Visual Attention Control by Sensor Space Segmentation for a Small Quadruped Robot based on Information Criterion Visual Attention Control by Sensor Space Segmentation for a Small Quadruped Robot based on Information Criterion Noriaki Mitsunaga and Minoru Asada Dept. of Adaptive Machine Systems, Osaka University,

More information

CS4758: Rovio Augmented Vision Mapping Project

CS4758: Rovio Augmented Vision Mapping Project CS4758: Rovio Augmented Vision Mapping Project Sam Fladung, James Mwaura Abstract The goal of this project is to use the Rovio to create a 2D map of its environment using a camera and a fixed laser pointer

More information

Extracting Spatio-temporal Local Features Considering Consecutiveness of Motions

Extracting Spatio-temporal Local Features Considering Consecutiveness of Motions Extracting Spatio-temporal Local Features Considering Consecutiveness of Motions Akitsugu Noguchi and Keiji Yanai Department of Computer Science, The University of Electro-Communications, 1-5-1 Chofugaoka,

More information

Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera

Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Tomokazu Satoy, Masayuki Kanbaray, Naokazu Yokoyay and Haruo Takemuraz ygraduate School of Information

More information

A Robust Gesture Recognition Using Depth Data

A Robust Gesture Recognition Using Depth Data A Robust Gesture Recognition Using Depth Data Hironori Takimoto, Jaemin Lee, and Akihiro Kanagawa In recent years, gesture recognition methods using depth sensor such as Kinect sensor and TOF sensor have

More information

Mobile Phone Operations using Human Eyes Only and its Applications

Mobile Phone Operations using Human Eyes Only and its Applications Vol. 9, No. 3, 28 Mobile Phone Operations using Human Eyes Only and its Applications Kohei Arai Information Science Department Graduate School of Science and Engineering, Saga University Saga City, Japan

More information

Mirror Based Framework for Human Body Measurement

Mirror Based Framework for Human Body Measurement 362 Mirror Based Framework for Human Body Measurement 1 Takeshi Hashimoto, 2 Takayuki Suzuki, 3 András Rövid 1 Dept. of Electrical and Electronics Engineering, Shizuoka University 5-1, 3-chome Johoku,

More information

A sliding walk method for humanoid robots using ZMP feedback control

A sliding walk method for humanoid robots using ZMP feedback control A sliding walk method for humanoid robots using MP feedback control Satoki Tsuichihara, Masanao Koeda, Seiji Sugiyama, and Tsuneo oshikawa Abstract In this paper, we propose two methods for a highly stable

More information

Processing 3D Surface Data

Processing 3D Surface Data Processing 3D Surface Data Computer Animation and Visualisation Lecture 12 Institute for Perception, Action & Behaviour School of Informatics 3D Surfaces 1 3D surface data... where from? Iso-surfacing

More information

REFINEMENT OF COLORED MOBILE MAPPING DATA USING INTENSITY IMAGES

REFINEMENT OF COLORED MOBILE MAPPING DATA USING INTENSITY IMAGES REFINEMENT OF COLORED MOBILE MAPPING DATA USING INTENSITY IMAGES T. Yamakawa a, K. Fukano a,r. Onodera a, H. Masuda a, * a Dept. of Mechanical Engineering and Intelligent Systems, The University of Electro-Communications,

More information

Real-Time Document Image Retrieval for a 10 Million Pages Database with a Memory Efficient and Stability Improved LLAH

Real-Time Document Image Retrieval for a 10 Million Pages Database with a Memory Efficient and Stability Improved LLAH 2011 International Conference on Document Analysis and Recognition Real-Time Document Image Retrieval for a 10 Million Pages Database with a Memory Efficient and Stability Improved LLAH Kazutaka Takeda,

More information