Advanced Soft Remote Control System in Human-friendliness
TH-J2-1 Tokyo, Japan (September 20-24, 2006)

Advanced Soft Remote Control System in Human-friendliness

Seung-Eun Yang 1, Jun-Hyeong Do 2, Hyoyoung Jang 3, Jin-Woo Jung 4, and Zeungnam Bien 5
1, 2, 3, 5 Department of Electrical Engineering and Computer Science, KAIST
Human-friendly Welfare Robot System Engineering Research Center, KAIST
Guseong-dong, Yuseong-gu, Daejeon, Korea
{seyang, jhdo, hyjang, jinwoo}@ctrsys.kaist.ac.kr, zbien@ee.kaist.ac.kr

Abstract

This paper describes a human-friendly interface based on hand gestures. Using two USB pan-tilt cameras, the system enables the user to operate home appliances with hand pointing and ten hand motions, regardless of the user's position. The main contribution of the system lies in its user-friendly initialization methods, which include interaction between the user and the system. The user can initialize or modify the map describing the locations of the home appliances to be controlled simply by hand gesture. An embedded feedback mechanism enhances overall performance by correcting the user's mis-pointing.

Keywords: 3D position recognition, Hand pointing command, Arbitrary camera location

I. INTRODUCTION

Recent advances in computer science have extended into the concepts of ubiquitous, pervasive, and convergence computing, in which user-friendly interfaces are required for effective and efficient operation of devices. Gesture is one of the most intuitive means for such interfaces. Gesture-based interfaces have two advantages over speech-based ones: a voice-operated system needs a sensitive microphone near the user, and gesture is more convenient for expressing spatial information.

Nowadays, many home appliances are operated by remote controllers. This is cumbersome, because the user must keep the remote controllers close at hand and select the appropriate one every time he/she wants to control an appliance; for elderly or disabled people, these inconveniences are more serious. Studies on gesture recognition for controlling home appliances have recently been attempted, but few of them focus on hand recognition for controlling multiple home appliances in an unstructured environment [1, 10-13]. We have already developed the Soft Remote Control System, which controls multiple home appliances by hand gesture using three CCD cameras attached to the ceiling [1-3]. In this paper, as an extension of that system, we propose an improved human-friendly gesture-based interface focusing on the following two issues: (1) by using two USB pan-tilt cameras, production and installation costs are reduced, and (2) newly adopted user-friendly initialization methods enhance usability.

The remainder of this paper is organized as follows: Section 2 introduces the overall configuration of the advanced Soft Remocon System, Section 3 details the initialization method for the locations of the targeted home appliances, Section 4 presents experimental results with discussion, and Section 5 summarizes and concludes the study.

II. ADVANCED SOFT REMOTE CONTROL SYSTEM

Figure 1. Conventional soft remote control system
Figure 2. Advanced soft remote control system

Figure 1 shows the conventional soft remote control system, which uses three CCD cameras attached to the ceiling [1, 9]. Because the position of each camera is fixed, the range of user positions from which commands can be given is rather limited, even though the cameras have pan/tilt capability. In contrast, the advanced system shown in Figure 2 allows the two USB cameras to be placed at arbitrary positions, so the user can set them up wherever he/she wants in the residence. Using a pattern attached to the side of one camera, the advanced system builds a relative 3D axis, which makes it possible to calculate the relative positions of home appliances for arbitrary camera locations.
For user friendliness, the system uses a simple hand-pointing gesture to determine the 3D positions of home appliances. Because pointing gestures differ from person to person, we define the pointing command as stretching the user's hand toward an object so that the center of the object is concealed by the hand. When a user wants to control a certain home appliance, he/she points at it with the hand to select it. However, the user cannot point at it perfectly every time. If the user fails to point at the appliance, the system finds the nearest appliance and tells the user the direction and distance from the pointed position to that appliance. This feedback enables the user to select the proper appliance, so errors in recognizing the user's pointing direction are reduced.

To control each home appliance, 10 different hand gestures are used, as shown in Figure 3. Under the assumption that a command hand motion is performed at high speed, with linking motions before and after the command, the system determines the start and end points of the commanding hand motion. For recognition of the hand command motion, a hierarchical classifier based on HMMs is adopted [9].

Figure 3. Command gestures using hand motions: (a) 1-dimensional motions, (b) 2-dimensional motions

III. USER-FRIENDLY INITIALIZATION

A. Overall structure of initialization

Figures 4 and 5 show the flow charts of home appliance position storing and recognition, respectively. For face/hand detection and tracking, we adopt the dynamic cascade structure using multimodal cues that was used in the former system [6]. The system first recognizes, by panning the reference camera, a specific pattern attached to the side of the other camera. It then calculates the distance and angle between the two cameras from the result of the pattern analysis. With this information, it can calculate the 3D positions of the user's face and hand. To store the 3D position of a home appliance the user wants to control, he/she executes the hand-pointing command at two different places. While the user is pointing, the system calculates the directional vector from the user's face to the hand. The crossing point, i.e., the midpoint of the common perpendicular of the two extended directional vectors, is stored as the 3D position of the pointed appliance.

Figure 5. Flow chart of home appliance position recognition

B. Recognition of the pattern
The information we need in order to build the relative axis from the pattern is the distance and angle between the two cameras. Because the field of view of a USB camera is limited, the pattern must be small enough. To obtain the needed information in an arbitrary environment, we use a pattern consisting of red circles, as shown in Figure 9.

Figure 4. Flow chart of home appliance position storing
Figure 6. Procedure of pattern recognition

Figure 6 shows the pattern recognition procedure. Since the RGB components are sensitive to lighting conditions, we convert the RGB image to YCrCb and discard the Y component, which contains the luminance information. After splitting the image into Cr (red color information) and Cb (blue color information), red blobs are detected by thresholding. A morphological closing operation is applied during this procedure to remove fragments of the red blobs. To remove red blobs that do not belong to the pattern, we exploit the characteristic distances between the red circles of the pattern. As shown in Figure 7, this easily discriminates the 9 circles of the pattern from the two other red blobs.
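As a concrete illustration, the color-thresholding step of this procedure can be sketched as follows. This is a minimal sketch, not the system's actual code: it assumes NumPy, uses the standard BT.601 YCrCb conversion, and the threshold value is illustrative. Blob labeling, closing, and the distance-based filtering of Figure 7 would follow this step.

```python
import numpy as np

def rgb_to_cr(rgb):
    """Cr (red-difference) channel of the YCrCb color space (ITU-R BT.601).
    The Y (luminance) component is never computed, mirroring the paper's
    choice to discard it for robustness to lighting."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b

def red_mask(rgb, cr_threshold=160.0):
    """Binary mask of strongly red pixels, obtained by thresholding Cr.
    The threshold is an illustrative value, not taken from the paper."""
    return rgb_to_cr(rgb) > cr_threshold
```

In an OpenCV-based pipeline the same stage would be `cvtColor` to YCrCb, a threshold on the Cr channel, and `morphologyEx` with `MORPH_CLOSE` for the fragment removal described above.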
Figure 7. Distance plot between centers of red areas

C. Calculation of the distance and angle between two cameras

After recognizing the pattern, we must calculate the distance and angle between the two cameras. For this, we measured the focal length of the camera using the GML MatLab Camera Calibration Toolbox and MATLAB (MathWorks, USA) [8].

Figure 8. Distance calculation from camera to pattern

Figure 8 shows how to calculate the distance between the two cameras. Using the focal length, we can calculate the lengths a1 and b1, and b2 is the known distance between circles on the pattern. The distance from the reference camera to the pattern then follows by proportionality.

Figure 9. Angle calculation using pattern

We can form a triangle from the two columns on the pattern and the center point of the reference camera, as shown in Figure 9. The distance from the camera to each column on the pattern can be calculated by the proportionality discussed for Figure 8. Knowing the three side lengths of the triangle, we can calculate the inside angle by the law of cosines.

D. Calculation of 3D position

To calculate the 3D position of a home appliance, we must first know the 3D positions of the user's face and hand.

Figure 10. Calculation of 3D position

After detecting the user's face/hand, we find the two lines s1 and s2 from the center of each camera, C_M,1 and C_M,2, through the user's face/hand positions p1 and p2 in each camera image, respectively, as shown in Figure 10 [7]. We then obtain the 3D position as the midpoint of the line segment P_M,1 P_M,2 that is perpendicular to both s1 and s2. If the two cameras are located too close to each other, however, the error increases because of the limited information in the stereo image, so in this system we assume a minimum distance of 0.5 m between the cameras.
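The two geometric computations above, the interior angle via the law of cosines (Section III-C) and the midpoint of the common perpendicular of two viewing lines (Section III-D), can be sketched as follows. This is a minimal illustration under our own naming, not the system's implementation.

```python
import numpy as np

def angle_by_law_of_cosines(a, b, c):
    """Interior angle (radians) opposite side c of a triangle with sides a, b, c."""
    return np.arccos((a * a + b * b - c * c) / (2.0 * a * b))

def midpoint_of_common_perpendicular(c1, d1, c2, d2):
    """Midpoint of the segment perpendicular to both lines
    x = c1 + t1*d1 and x = c2 + t2*d2, used here as the 3D estimate."""
    c1, d1 = np.asarray(c1, float), np.asarray(d1, float)
    c2, d2 = np.asarray(c2, float), np.asarray(d2, float)
    r = c2 - c1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b  # zero when the two lines are parallel
    if abs(denom) < 1e-12:
        raise ValueError("lines are (nearly) parallel")
    t1 = (c * (d1 @ r) - b * (d2 @ r)) / denom
    t2 = (b * (d1 @ r) - a * (d2 @ r)) / denom
    p1 = c1 + t1 * d1  # closest point on line 1
    p2 = c2 + t2 * d2  # closest point on line 2
    return 0.5 * (p1 + p2)
```

For example, a right isosceles triangle (sides 1, 1, sqrt(2)) yields an angle of pi/2, and two skew lines along the x and y axes offset by 1 along z yield a midpoint halfway between them.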
E. Storage of home appliance 3D position

Figure 11. Storing 3D position of home appliance by user

After calculating the 3D positions of the user's face and hand by the method of Section III-D, we obtain a directional vector from the user's face to the hand. To determine the 3D position of a specific object, the user points at the object from two different positions, as shown in Figure 11. When the user moves from position 1 to position 2, the two USB cameras track the user with their pan-tilt function, so to keep the 3D axis consistent we define a translation matrix for the panning and tilting angles.

F. Home appliance selection through hand pointing command and feedback for wrong pointing command

To select a home appliance, the user performs the hand-pointing command. The selection procedure is shown in Figure 12.
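The selection-with-feedback behavior detailed in the rest of this subsection can be sketched as below. The function names, the dictionary interface, and the box half-size are our illustrative assumptions; the paper models each selection range as a box around the stored appliance position.

```python
import numpy as np

def select_appliance(face, hand, appliances, half_size=0.3):
    """Extend the face-to-hand ray to each stored appliance's distance,
    select the appliance whose box contains the candidate point, otherwise
    report the nearest stored position and the correction offset.
    appliances: dict name -> stored 3D position. Returns (name, feedback),
    exactly one of which is None."""
    face, hand = np.asarray(face, float), np.asarray(hand, float)
    direction = hand - face
    direction = direction / np.linalg.norm(direction)
    best_name, best_offset = None, None
    for name, pos in appliances.items():
        pos = np.asarray(pos, float)
        # Extend the pointing ray out to this appliance's distance from the user.
        candidate = face + direction * np.linalg.norm(pos - face)
        offset = pos - candidate
        if np.all(np.abs(offset) <= half_size):  # inside the selection box
            return name, None
        if best_offset is None or np.linalg.norm(offset) < np.linalg.norm(best_offset):
            best_name, best_offset = name, offset
    # Nothing selected: feed back the nearest appliance and the offset
    # (direction and distance) the user should correct by.
    return None, (best_name, best_offset)
```

A miss therefore never fails silently: the caller always receives either a selected appliance or a concrete correction vector to relay to the user.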
Figure 12. Storing 3D position of home appliance

To recognize the position of a home appliance, we extend the directional vector that starts at the user's face and ends at the hand. The extension rate is determined by the distance from the user's face to each home appliance. Figure 12 shows the case in which the positions of three home appliances are stored. First, the system calculates the distance between the user and each appliance, then computes a candidate position by extending the directional vector by each extension rate. If the candidate position falls inside the selection range of an appliance, shown as three boxes in Figure 12, that device is selected; the figure shows the case in which the VCR is selected.

Image data, however, is very sensitive to the environment, and the hand-pointing command cannot be performed precisely every time, so false recognition is inevitable. The most common situation is that nothing is selected although the user is pointing at a specific appliance. To reduce this problem and accommodate user interaction, we provide feedback: the system tells the user which stored appliance is nearest to the pointed 3D position, together with the direction and distance needed to select that appliance properly.

Figure 13. Axis conversion from reference axis to user's view-dependent axis

The positional information must also be transformed to correspond to the user, because the viewpoint changes with the user's position and viewing direction. Figure 13 shows the procedure of this axis transformation, where axis_a is determined by the reference camera and axis_d is determined by the user's viewpoint. Every Y axis in the figure is parallel, and we already know the directional vector from the center of the reference axis to the stored home appliance position, so the only information needed for the axis transformation is the angle between axis_a and axis_d. The directional vector DIR is not always perpendicular to Plane C, but the imaginary plane DCF (Plane D) is always perpendicular to both Plane B and Plane C. Because X_b and X_a are parallel, we only have to know the value of θ to formulate the translation matrix between axis_a and axis_d by kinematics. By calculating the angle 90° + θ between the unit directional vector u(1,0,0) and Plane D, the translation matrix can be defined. Using this translation matrix, we can define the user-dependent axis from the reference axis.

IV. EXPERIMENTAL RESULTS

A. 3D position of home appliance

The stored 3D position of each home appliance is crucial for the operation of the entire system: if the stored position is wrong, the user will fail to select the proper appliance no matter how many times he/she points at it. To evaluate the accuracy of the proposed system, we performed the following experiments.

Figure 14. Measured positions and resulting positions from the system
Figure 15. 3D positions from two different subjects

Figure 14 shows the result of storing three different home appliance positions (the unit of each axis is cm). The point inside each rectangle is the measured point, and the other four points in each circle are experimental results. The resulting positions cluster around each measured point.
Figure 15 shows the results for two different subjects, with each subject's results distinguished by rectangles. There is noticeable variation between the results, especially in cases A and C. Because people cannot always point in exactly the same way, this variation is inevitable, but the results show that each person has his/her own pointing tendency. Additional experiments show that the results are very sensitive to the environment, especially the lighting conditions; cluster C in Figure 15 shows such a case (one point lies exceptionally far from the others).

B. Recognition of home appliance position and feedback

Because each user has his/her own pointing tendency, it is more meaningful to analyze the recognition of home appliance positions against positions stored by the same subject. Table 1 shows the recognition rate: the user pointed 5 times at each appliance (Appliance A to Appliance C), and ten such experiments were executed.

Table 1. Success rate of home appliance selection

                 Appliance A    Appliance B    Appliance C
  Success rate   96% (48/50)    84% (42/50)    88% (44/50)

Table 2 shows the results for the feedback. To evaluate its accuracy, we checked whether the nearest appliance was detected when no appliance was selected and, when the nearest appliance was detected correctly, whether the feedback direction was correct. Each experiment was done separately.

Table 2. The accuracy of feedback

                 Nearest appliance detection    Feedback direction
  Success rate   89% (89/100)                   83% (83/100)

V. CONCLUDING REMARKS

In this paper, we proposed a method to calculate and recognize 3D positions solely by means of the hand-pointing command. It performs the indispensable initialization task for a soft remote control system capable of controlling various functions of home appliances through natural hand gestures. The method allows arbitrary camera positions, so the range of user positions for commands is expanded.
Moreover, the system provides feedback when the user's pointing command is ambiguous. These features not only enhance the overall performance of the system but also provide a user-friendly interface, so the method is also applicable to other interfaces between humans and intelligent systems. In further study, we will focus on enhancing the robustness of the system by using user-adaptation technology and by devising a new pattern.

REFERENCES

[1] J.-H. Do, J.-B. Kim, K.-H. Park, W.-C. Bang and Z. Z. Bien, "Soft Remote Control System using Hand Pointing Gesture," Int. Journal of Human-friendly Welfare Robotic Systems, vol. 3, no. 1, pp. 27-30, March
[2] J.-W. Jung, J.-H. Do, Y.-M. Kim, K.-S. Suh, D.-J. Kim, and Z. Bien, "Advanced robotic residence for the elderly/the handicapped: realization and user evaluation," Proc. of the 9th Int. Conf. on Rehabilitation Robotics
[3] D. H. Stefanov, Z. Z. Bien, and W.-C. Bang, "The Smart House for Older Persons and Persons With Physical Disabilities: Structure, Technology Arrangements, and Perspectives," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 12, no. 2, June 2004
[4] Z. Z. Bien, K.-H. Park, J.-W. Jung and J.-H. Do, "Intention Reading is Essential in Human-friendly Interfaces for the Elderly and the Handicapped," IEEE Transactions on Industrial Electronics, vol. 52, no. 6, 2005
[5] H. Jang, J.-H. Do, J. Jung, K.-H. Park, and Z. Z. Bien, "View-invariant Hand-posture Recognition Method for Soft-Remocon System," Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), Alberta, Canada, September 28-October 2, 2004
[6] J.-H. Do and Z. Bien, "A Dynamic Cascade Structure Using Multimodal Cues for Fast and Robust Face Detection in Videos," Pattern Recognition Letters, submitted
[7] M. Kohler, "Vision Based Remote Control in Intelligent Home Environments," Proceedings of 3D Image Analysis and Synthesis, Erlangen, Germany, November 18-19, 1996
[8] V. Vezhnevets, GML Matlab Camera Calibration Toolbox, Graphics & Media Lab, Dept. of Computer Science, Moscow State University, November 1, 2005
[9] J.-H. Do, H. Jang, S. H. Jung, J. Jung and Z. Bien, "Soft Remote Control System in the Intelligent Sweet Home," Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Canada, August 2-6, 2005
[10] N. Jojic, B. Brumitt, et al., "Detection and Estimation of Pointing Gestures in Dense Disparity Maps," Proc. 4th IEEE Int. Conf. on Automatic Face and Gesture Recognition
[11] S. Sato and S. Sakane, "A Human-Robot Interface Using an Interactive Hand Pointer that Projects a Mark in the Real Work Space," Proc. of the 2000 IEEE ICRA, April
[12] C. Colombo, A. D. Bimbo and A. Valli, "Visual Capture and Understanding of Hand Pointing Actions in a 3-D Environment," IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 33, no. 4, August
[13] K. Irie, N. Wakamura, and K. Umeda, "Construction of an Intelligent Room Based on Gesture Recognition," Proc. of IEEE Int. Conf. on IROS

ACKNOWLEDGEMENT

This work is fully supported by the SRC/ERC program of MOST/KOSEF (Grant #R )
More informationA method for depth-based hand tracing
A method for depth-based hand tracing Khoa Ha University of Maryland, College Park khoaha@umd.edu Abstract An algorithm for natural human-computer interaction via in-air drawing is detailed. We discuss
More informationTime-to-Contact from Image Intensity
Time-to-Contact from Image Intensity Yukitoshi Watanabe Fumihiko Sakaue Jun Sato Nagoya Institute of Technology Gokiso, Showa, Nagoya, 466-8555, Japan {yukitoshi@cv.,sakaue@,junsato@}nitech.ac.jp Abstract
More informationAbstract. 1 Introduction. 2 Motivation. Information and Communication Engineering October 29th 2010
Information and Communication Engineering October 29th 2010 A Survey on Head Pose Estimation from Low Resolution Image Sato Laboratory M1, 48-106416, Isarun CHAMVEHA Abstract Recognizing the head pose
More informationRobotic Grasping Based on Efficient Tracking and Visual Servoing using Local Feature Descriptors
INTERNATIONAL JOURNAL OF PRECISION ENGINEERING AND MANUFACTURING Vol. 13, No. 3, pp. 387-393 MARCH 2012 / 387 DOI: 10.1007/s12541-012-0049-8 Robotic Grasping Based on Efficient Tracking and Visual Servoing
More informationVisual Servoing Utilizing Zoom Mechanism
IEEE Int. Conf. on Robotics and Automation 1995, pp.178 183, Nagoya, May. 12 16, 1995 1 Visual Servoing Utilizing Zoom Mechanism Koh HOSODA, Hitoshi MORIYAMA and Minoru ASADA Dept. of Mechanical Engineering
More informationCooperative Targeting: Detection and Tracking of Small Objects with a Dual Camera System
Cooperative Targeting: Detection and Tracking of Small Objects with a Dual Camera System Moein Shakeri and Hong Zhang Abstract Surveillance of a scene with computer vision faces the challenge of meeting
More informationA Two-stage Scheme for Dynamic Hand Gesture Recognition
A Two-stage Scheme for Dynamic Hand Gesture Recognition James P. Mammen, Subhasis Chaudhuri and Tushar Agrawal (james,sc,tush)@ee.iitb.ac.in Department of Electrical Engg. Indian Institute of Technology,
More informationGesture Recognition using Temporal Templates with disparity information
8- MVA7 IAPR Conference on Machine Vision Applications, May 6-8, 7, Tokyo, JAPAN Gesture Recognition using Temporal Templates with disparity information Kazunori Onoguchi and Masaaki Sato Hirosaki University
More informationVehicle Detection Method using Haar-like Feature on Real Time System
Vehicle Detection Method using Haar-like Feature on Real Time System Sungji Han, Youngjoon Han and Hernsoo Hahn Abstract This paper presents a robust vehicle detection approach using Haar-like feature.
More informationA Hybrid Face Detection System using combination of Appearance-based and Feature-based methods
IJCSNS International Journal of Computer Science and Network Security, VOL.9 No.5, May 2009 181 A Hybrid Face Detection System using combination of Appearance-based and Feature-based methods Zahra Sadri
More informationA NOVEL APPROACH TO ACCESS CONTROL BASED ON FACE RECOGNITION
A NOVEL APPROACH TO ACCESS CONTROL BASED ON FACE RECOGNITION A. Hadid, M. Heikkilä, T. Ahonen, and M. Pietikäinen Machine Vision Group Infotech Oulu and Department of Electrical and Information Engineering
More informationWearable Master Device Using Optical Fiber Curvature Sensors for the Disabled
Proceedings of the 2001 IEEE International Conference on Robotics & Automation Seoul, Korea May 21-26, 2001 Wearable Master Device Using Optical Fiber Curvature Sensors for the Disabled Kyoobin Lee*, Dong-Soo
More informationProduction of Video Images by Computer Controlled Cameras and Its Application to TV Conference System
Proc. of IEEE Conference on Computer Vision and Pattern Recognition, vol.2, II-131 II-137, Dec. 2001. Production of Video Images by Computer Controlled Cameras and Its Application to TV Conference System
More informationA 100Hz Real-time Sensing System of Textured Range Images
A 100Hz Real-time Sensing System of Textured Range Images Hidetoshi Ishiyama Course of Precision Engineering School of Science and Engineering Chuo University 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551,
More information2 OVERVIEW OF RELATED WORK
Utsushi SAKAI Jun OGATA This paper presents a pedestrian detection system based on the fusion of sensors for LIDAR and convolutional neural network based image classification. By using LIDAR our method
More informationConvergence Point Adjustment Methods for Minimizing Visual Discomfort Due to a Stereoscopic Camera
J. lnf. Commun. Converg. Eng. 1(4): 46-51, Dec. 014 Regular paper Convergence Point Adjustment Methods for Minimizing Visual Discomfort Due to a Stereoscopic Camera Jong-Soo Ha 1, Dae-Woong Kim, and Dong
More informationA deformable model driven method for handling clothes
A deformable model driven method for handling clothes Yasuyo Kita Fuminori Saito Nobuyuki Kita Intelligent Systems Institute, National Institute of Advanced Industrial Science and Technology (AIST) AIST
More informationNOVEL HYBRID GENETIC ALGORITHM WITH HMM BASED IRIS RECOGNITION
NOVEL HYBRID GENETIC ALGORITHM WITH HMM BASED IRIS RECOGNITION * Prof. Dr. Ban Ahmed Mitras ** Ammar Saad Abdul-Jabbar * Dept. of Operation Research & Intelligent Techniques ** Dept. of Mathematics. College
More informationFacial Expression Recognition using Principal Component Analysis with Singular Value Decomposition
ISSN: 2321-7782 (Online) Volume 1, Issue 6, November 2013 International Journal of Advance Research in Computer Science and Management Studies Research Paper Available online at: www.ijarcsms.com Facial
More informationDisguised Face Identification Based Gabor Feature and SVM Classifier
Disguised Face Identification Based Gabor Feature and SVM Classifier KYEKYUNG KIM, SANGSEUNG KANG, YUN KOO CHUNG and SOOYOUNG CHI Department of Intelligent Cognitive Technology Electronics and Telecommunications
More informationResearch on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration
, pp.33-41 http://dx.doi.org/10.14257/astl.2014.52.07 Research on an Adaptive Terrain Reconstruction of Sequence Images in Deep Space Exploration Wang Wei, Zhao Wenbin, Zhao Zhengxu School of Information
More informationFully Automatic Methodology for Human Action Recognition Incorporating Dynamic Information
Fully Automatic Methodology for Human Action Recognition Incorporating Dynamic Information Ana González, Marcos Ortega Hortas, and Manuel G. Penedo University of A Coruña, VARPA group, A Coruña 15071,
More informationURBAN STRUCTURE ESTIMATION USING PARALLEL AND ORTHOGONAL LINES
URBAN STRUCTURE ESTIMATION USING PARALLEL AND ORTHOGONAL LINES An Undergraduate Research Scholars Thesis by RUI LIU Submitted to Honors and Undergraduate Research Texas A&M University in partial fulfillment
More informationVIDEO OBJECT SEGMENTATION BY EXTENDED RECURSIVE-SHORTEST-SPANNING-TREE METHOD. Ertem Tuncel and Levent Onural
VIDEO OBJECT SEGMENTATION BY EXTENDED RECURSIVE-SHORTEST-SPANNING-TREE METHOD Ertem Tuncel and Levent Onural Electrical and Electronics Engineering Department, Bilkent University, TR-06533, Ankara, Turkey
More information3D HAND LOCALIZATION BY LOW COST WEBCAMS
3D HAND LOCALIZATION BY LOW COST WEBCAMS Cheng-Yuan Ko, Chung-Te Li, Chen-Han Chung, and Liang-Gee Chen DSP/IC Design Lab, Graduated Institute of Electronics Engineering National Taiwan University, Taiwan,
More informationPerson identification from spatio-temporal 3D gait
200 International Conference on Emerging Security Technologies Person identification from spatio-temporal 3D gait Yumi Iwashita Ryosuke Baba Koichi Ogawara Ryo Kurazume Information Science and Electrical
More information3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera
3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera Shinichi GOTO Department of Mechanical Engineering Shizuoka University 3-5-1 Johoku,
More informationNOVEL PCA-BASED COLOR-TO-GRAY IMAGE CONVERSION. Ja-Won Seo and Seong Dae Kim
NOVEL PCA-BASED COLOR-TO-GRAY IMAGE CONVERSION Ja-Won Seo and Seong Dae Kim Korea Advanced Institute of Science and Technology (KAIST) Department of Electrical Engineering 21 Daehak-ro, Yuseong-gu, Daejeon
More informationEngineering Drawings Recognition Using a Case-based Approach
Engineering Drawings Recognition Using a Case-based Approach Luo Yan Department of Computer Science City University of Hong Kong luoyan@cs.cityu.edu.hk Liu Wenyin Department of Computer Science City University
More informationCS231A Course Project Final Report Sign Language Recognition with Unsupervised Feature Learning
CS231A Course Project Final Report Sign Language Recognition with Unsupervised Feature Learning Justin Chen Stanford University justinkchen@stanford.edu Abstract This paper focuses on experimenting with
More informationFOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES
FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES D. Beloborodov a, L. Mestetskiy a a Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University,
More information1st frame Figure 1: Ball Trajectory, shadow trajectory and a reference player 48th frame the points S and E is a straight line and the plane formed by
Physics-based 3D Position Analysis of a Soccer Ball from Monocular Image Sequences Taeone Kim, Yongduek Seo, Ki-Sang Hong Dept. of EE, POSTECH San 31 Hyoja Dong, Pohang, 790-784, Republic of Korea Abstract
More informationOperation Improvement of an Indoor Robot by Using Hand Fingers Recognition
Operation Improvement of an Indoor Robot by Using Hand Fingers Recognition K. Suneetha*, M. S. R. Sekhar**, U. Yedukondalu*** * Assistant Professor, (Department of Electronics and Communication Engineering,
More informationVision-Based 3D Fingertip Interface for Spatial Interaction in 3D Integral Imaging System
International Conference on Complex, Intelligent and Software Intensive Systems Vision-Based 3D Fingertip Interface for Spatial Interaction in 3D Integral Imaging System Nam-Woo Kim, Dong-Hak Shin, Dong-Jin
More informationDevelopment of Wall Mobile Robot for Household Use
The 3rd International Conference on Design Engineering and Science, ICDES 2014 Pilsen, Czech Republic, August 31 September 3, 2014 Development of Wall Mobile Robot for Household Use Takahiro DOI* 1, Kenta
More informationResearch Article Path Planning Using a Hybrid Evolutionary Algorithm Based on Tree Structure Encoding
e Scientific World Journal, Article ID 746260, 8 pages http://dx.doi.org/10.1155/2014/746260 Research Article Path Planning Using a Hybrid Evolutionary Algorithm Based on Tree Structure Encoding Ming-Yi
More informationA Novel Criterion Function in Feature Evaluation. Application to the Classification of Corks.
A Novel Criterion Function in Feature Evaluation. Application to the Classification of Corks. X. Lladó, J. Martí, J. Freixenet, Ll. Pacheco Computer Vision and Robotics Group Institute of Informatics and
More informationAn Efficient Approach for Color Pattern Matching Using Image Mining
An Efficient Approach for Color Pattern Matching Using Image Mining * Manjot Kaur Navjot Kaur Master of Technology in Computer Science & Engineering, Sri Guru Granth Sahib World University, Fatehgarh Sahib,
More informationDynamic Model Of Anthropomorphic Robotics Finger Mechanisms
Vol.3, Issue.2, March-April. 2013 pp-1061-1065 ISSN: 2249-6645 Dynamic Model Of Anthropomorphic Robotics Finger Mechanisms Abdul Haseeb Zaidy, 1 Mohd. Rehan, 2 Abdul Quadir, 3 Mohd. Parvez 4 1234 Mechanical
More informationImplementation of a Smart Ward System in a Hi-Tech Hospital by Using a Kinect Sensor Camera
Volume: 4 sue: 5 531-536 Implementation of a Smart Ward System in a Hi-Tech Hospital by Using a Kinect Sensor Camera Jyothilakshmi P a, K R Rekha b, K R Nataraj ab Research Scholar, Jain University, Bangalore,
More informationA Visualization Tool to Improve the Performance of a Classifier Based on Hidden Markov Models
A Visualization Tool to Improve the Performance of a Classifier Based on Hidden Markov Models Gleidson Pegoretti da Silva, Masaki Nakagawa Department of Computer and Information Sciences Tokyo University
More informationStereo Rectification for Equirectangular Images
Stereo Rectification for Equirectangular Images Akira Ohashi, 1,2 Fumito Yamano, 1 Gakuto Masuyama, 1 Kazunori Umeda, 1 Daisuke Fukuda, 2 Kota Irie, 3 Shuzo Kaneko, 2 Junya Murayama, 2 and Yoshitaka Uchida
More informationHAND-GESTURE BASED FILM RESTORATION
HAND-GESTURE BASED FILM RESTORATION Attila Licsár University of Veszprém, Department of Image Processing and Neurocomputing,H-8200 Veszprém, Egyetem u. 0, Hungary Email: licsara@freemail.hu Tamás Szirányi
More informationCONCEPTUAL CONTROL DESIGN FOR HARVESTER ROBOT
CONCEPTUAL CONTROL DESIGN FOR HARVESTER ROBOT Wan Ishak Wan Ismail, a, b, Mohd. Hudzari Razali, a a Department of Biological and Agriculture Engineering, Faculty of Engineering b Intelligent System and
More informationIntegration of Multiple-baseline Color Stereo Vision with Focus and Defocus Analysis for 3D Shape Measurement
Integration of Multiple-baseline Color Stereo Vision with Focus and Defocus Analysis for 3D Shape Measurement Ta Yuan and Murali Subbarao tyuan@sbee.sunysb.edu and murali@sbee.sunysb.edu Department of
More informationAutonomous Sensor Center Position Calibration with Linear Laser-Vision Sensor
International Journal of the Korean Society of Precision Engineering Vol. 4, No. 1, January 2003. Autonomous Sensor Center Position Calibration with Linear Laser-Vision Sensor Jeong-Woo Jeong 1, Hee-Jun
More informationInteractive PTZ Camera Control System Using Wii Remote and Infrared Sensor Bar
Interactive PTZ Camera Control System Using Wii Remote and Infrared Sensor Bar A. H. W. Goh, Y. S. Yong, C. H. Chan, S. J. Then, L. P. Chu, S. W. Chau, and H. W. Hon International Science Index, Computer
More informationDepartment of Electrical Engineering, Keio University Hiyoshi Kouhoku-ku Yokohama 223, Japan
Shape Modeling from Multiple View Images Using GAs Satoshi KIRIHARA and Hideo SAITO Department of Electrical Engineering, Keio University 3-14-1 Hiyoshi Kouhoku-ku Yokohama 223, Japan TEL +81-45-563-1141
More information