STUDENT GESTURE RECOGNITION SYSTEM IN CLASSROOM 2.0

Chiung-Yao Fang, Min-Han Kuo, Greg-C Lee and Sei-Wang Chen
Department of Computer Science and Information Engineering, National Taiwan Normal University
No. 88, Section 4, Ting-Chou Road, Taipei, 116, Taiwan, R.O.C.

ABSTRACT
This paper presents a student gesture recognition system employed in a theatre classroom, a subsystem of Classroom 2.0. In this study, a PTZ camera is set up at the front of the classroom to capture video sequences. The system first pre-processes the input sequence to locate the main lines of the theatre classroom and to extract candidate foreground pixels. Subsequently, motion and color information is used to identify the foreground pixels, which are regarded as the seeds from which the foreground regions grow. The system combines the foreground regions to segment the objects, each of which represents an individual student. Six student gestures, namely raising the right hand, raising the left hand, raising two hands, lying prone, standing up, and the normal posture, are classified based on the relationships between the regions within each object. The experimental results demonstrate that the proposed method is robust and efficient.

KEY WORDS
Smart classroom, gesture recognition, object segmentation.

1. Introduction

Smart Classroom, a recently proposed term, usually refers to a room housing an instructor station equipped with a computer and audiovisual equipment, which allows the instructor to teach using a wide variety of media. In Taiwan, the instructor station is referred to as an e-station, which typically consists of a PC with software, a handwriting tablet, an external DVD player, an amplifier, a set of speakers, a set of wired or wireless microphones and a unified control panel. This kind of Smart Classroom, however, is in reality not an example of a smart system. In this paper, we propose the concept of a truly smart classroom, which we name Classroom 2.0. In our view, a Smart Classroom such as Classroom 2.0 should be intelligent, interactive, individualized, and integrated. In the following paragraphs, we briefly introduce these properties and the related systems of Classroom 2.0.

Intelligent: the classroom technology should automatically conduct tasks that do not require human intervention. To enable this, the following four systems should be employed in the classroom:
(1) The intelligent Roll-Call (irollcall) system, which can automatically recognize the students who are present in the classroom.
(2) The intelligent Teaching Response (itres) system, which can automatically control the technologies used in the classroom, including the software systems.
(3) The intelligent Classroom Exception Recognition (icerec) system, which can automatically identify and track human behavior in the classroom. This system is intended to automatically track the behavior of students in a classroom and is a revolutionary way of collecting data for educational research.
(4) The intelligent Content Retrieval (icore) system, which can automatically give feedback on the quality of an answer to a given question.

Interactive: the classroom technology should facilitate interaction between the instructor and the students.

Figure 1. Example of three input sequences.

Figure 2. Flowchart of the student gesture recognition system (video sequence input, motion detection, foreground pixel extraction, main line location, foreground pixel identification, region growing, object segmentation, gesture recognition).

Individualized: the classroom technology should react in accordance with the behavior of individual users.

Integrated: the classroom technologies should be integrated into a single i4 system instead of being made up of separate systems.

From the above description, it is evident that irollcall, itres, icerec, and icore are the four kernel systems of Classroom 2.0. The focus of this paper is to introduce the icerec system. The icerec system has been designed to recognize specially defined student behaviour in a classroom. The pre-defined exceptions include the raising of hands to ask or answer questions, students dozing off during the lecture, taking naps, and students standing up and sitting down. More exceptions may be added as required by educational research.

In this study, the students in a classroom are assumed to remain seated through a lesson, thus student gestures constitute a motion space consisting of the upper body, face, and hands. The PTZ camera is set up at the front of the classroom, and three input sequence examples are shown in Figure 1. Firstly, the light from the fluorescent lamps may differ in different regions of the classroom, and the detection of students becomes more complex due to reflections from the chairs. Secondly, students in a classroom with a theatre-style setup are partially occluded not only by those sitting in front of them, but sometimes also by students sitting beside them. Thirdly, the problem of motion segmentation needs to be solved, since more than one student may change their gesture in a given frame.

Mitra and Acharya [1] divided the gestures arising from different body parts into three classes: (1) hand and arm gestures, (2) head and face gestures, and (3) body gestures. We believe that student gestures comprise all three of these classes. Wan and Sawada [2] used the probabilistic distribution of the arm trajectory and a fuzzy approach to recognize as many as 30 different hand gestures. Their experimental results show that the recognition system obtains high recognition rates for the Rain and Round gestures. However, nine markers must be attached to a performer's body to indicate its feature points, an assumption that is difficult to satisfy in a classroom.

Figure 3. The flowchart of the main line location process (edge detection, horizontal edge extraction, importance of edge calculation, main line candidate extraction, main line identification, output of the main line locations).

Figure 4. An example of the main line location process.

Wang and Mori [3] presented two supervised hierarchical topic models for action recognition based on motion words. Video sequences are described by a bag-of-words representation in which a frame is represented by a single word, and the model is trained by a supervised learning process. Wang and Mori tested five different data sets to show that their methods achieve superior results. However, most gesture recognition systems are designed to recognize the gestures of a single person [2,3,4,5,6] and do not consider the partial occlusion problem. Our system has been developed to work on multiple students, taking partial occlusion into account.

2. System Flowchart

A flowchart of the student gesture recognition system is shown in Figure 2. Once the video sequence frames have been input into the system, the motion pixels and the candidate foreground pixels are detected. These two cues help to identify the foreground pixels. In parallel, the main lines, which indicate the horizontal lines of the rows of chairs, are located. Using the locations of the main lines as constraints, the foreground regions are grown from the identified foreground pixels, which serve as seeds. These foreground regions are then combined to segment the foreground objects, which are assumed to represent individual students. Finally, a gesture recognition technique is applied to identify the various student gestures.

2.1 Main Line Location

Figure 3 shows the flowchart of the main line location process, and Figure 4 shows an example. Once the video sequence frames have been input into the system, the system detects the edges using Sobel's approach. Figure 4(a) shows one of the input frames, and the edge detection result is shown in Figure 4(b). Subsequently, the system extracts the horizontal edges by applying the morphological opening operation with a 5x1 horizontal kernel; the result is shown in Figure 4(c). The horizontal edges are projected in the horizontal direction to obtain the number of edge pixels per row, which is regarded as the importance of the corresponding main line edge. The system extracts the main line candidates from the bottom of the frame based on this degree of importance. The red lines shown in Figure 4(d) indicate the main line candidates. The system clusters these candidates into different classes and calculates the average location of each class to identify the real positions of the main lines. Only the three main lines located at the bottom of the frame are extracted and preserved for the following process, as depicted in Figure 4(e). Finally, the system calculates the locations of the other main lines using a geometric series, as depicted by the green lines in Figure 4(f).

2.2 Motion Detection

The system detects motion by subtracting the intensity values of the pixels in the t-th frame from those of the corresponding pixels in the (t-1)-th frame and taking the absolute values of the differences. Let the intensity values of a pixel p at times t-1 and t be I_{t-1}(p) and I_t(p), respectively. The magnitude of the motion of this pixel is then defined as

    M(p) = |I_t(p) - I_{t-1}(p)|.
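The main line location step of Section 2.1 can be prototyped with standard image processing primitives. The sketch below is a minimal illustration assuming OpenCV and NumPy; the edge threshold, the cap of 30 candidate rows, the clustering gap, and the function name locate_main_lines are illustrative assumptions rather than values taken from the paper.

import cv2
import numpy as np

def locate_main_lines(gray_frame, num_lines=3, min_gap=10):
    # Sobel edge detection followed by a fixed threshold (threshold value assumed).
    gx = cv2.Sobel(gray_frame, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_frame, cv2.CV_32F, 0, 1, ksize=3)
    edges = (cv2.magnitude(gx, gy) > 50).astype(np.uint8)

    # Keep only horizontal edge segments with a 5x1 morphological opening.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 1))
    horizontal = cv2.morphologyEx(edges, cv2.MORPH_OPEN, kernel)

    # Project the horizontal edges onto the rows; the edge count per row is its "importance".
    importance = horizontal.sum(axis=1)

    # Take the strongest rows as candidates (30 is an assumed cap) and
    # cluster candidates that lie within min_gap rows of each other.
    candidates = sorted(int(r) for r in np.argsort(importance)[::-1][:30] if importance[r] > 0)
    if not candidates:
        return []
    clusters, current = [], [candidates[0]]
    for row in candidates[1:]:
        if row - current[-1] <= min_gap:
            current.append(row)
        else:
            clusters.append(current)
            current = [row]
    clusters.append(current)

    # The average row of each cluster is a main line; keep the lowest num_lines rows
    # (closest to the bottom of the frame), as in Figure 4(e). The remaining lines
    # would then be extrapolated with a geometric series, as in Figure 4(f).
    lines = sorted(int(np.mean(c)) for c in clusters)
    return lines[-num_lines:]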

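The frame-differencing rule in Section 2.2 reduces to a per-pixel absolute difference. The following sketch assumes OpenCV/NumPy and grayscale frames; the normalization to [0, 1] is an assumption made so that M(p) can later be combined with the other terms of the foreground probability in Section 2.3.

import cv2
import numpy as np

def motion_magnitude(prev_gray, curr_gray):
    # M(p) = |I_t(p) - I_{t-1}(p)|, normalized to [0, 1] (normalization assumed).
    diff = cv2.absdiff(curr_gray, prev_gray)
    return diff.astype(np.float32) / 255.0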
Figure 5. An example of motion detection.

Figure 5 shows an example of the motion detection process. Figures 5(a) and (b) show the (t-1)-th frame and the t-th frame respectively, and Figure 5(c) shows the motion detection result.

2.3 Foreground Pixel Extraction and Identification

The input frames are represented by the RGB color model. For a given pixel p, let its R, G, and B values be R_p, G_p, and B_p respectively. The hue value h of the pixel in the HSI model is computed as

    h = cos^{-1}( [(r-g) + (r-b)] / (2 * sqrt((r-g)^2 + (r-b)(g-b))) )            for b <= g, so that h lies in [0, pi],
    h = 2*pi - cos^{-1}( [(r-g) + (r-b)] / (2 * sqrt((r-g)^2 + (r-b)(g-b))) )     for b > g, so that h lies in (pi, 2*pi],

where r = R_p / (R_p + G_p + B_p), g = G_p / (R_p + G_p + B_p), and b = B_p / (R_p + G_p + B_p). Moreover, the Cr value of the pixel in the YCrCb model is computed as

    Cr = 0.500 R_p - 0.419 G_p - 0.081 B_p.

Based on these two components, the system first accumulates the pixel counts of the input frame to form a histogram over the hue and Cr values. Figure 6(c) shows the Hue-Cr histogram of the image in Figure 6(a). It is assumed that the background occupies larger regions of the classroom than the foreground. Therefore, the system normalizes and sorts the pixel counts of the histogram. After normalization, the top 40% of pixels are classified as background pixels, and the bottom 5% of pixels are classified as candidate foreground pixels.

Subsequently, the system identifies the foreground pixels using motion, color, and template information. Given a pixel p, let M(p) denote the normalized magnitude of its motion, C(p) the normalized value of the location of pixel p in the Hue-Cr histogram, and F_{t-1}(p) the foreground pixel probability of pixel p at time t-1. The foreground pixel probability of pixel p at time t is then calculated as

    F_t(p) = alpha * M(p) + beta * C(p) + gamma * F_{t-1}(p),

where alpha, beta, and gamma are constants, and F_0(p) = 0. If F_t(p) > 0.5, then pixel p is marked as a foreground pixel, shown in yellow in Figure 6(b). On the other hand, the top 40% of pixels are marked as background pixels, shown in blue in Figure 6(b).

Figure 6. An example of foreground pixel extraction.
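One possible reading of the Hue-Cr histogram step in Section 2.3 is sketched below. For brevity it uses OpenCV's HSV hue channel as a stand-in for the HSI hue formula above, and the 64x64 histogram resolution and the function name hue_cr_labels are assumptions; only the 40%/5% split follows the text.

import cv2
import numpy as np

def hue_cr_labels(bgr_frame, bg_frac=0.40, fg_frac=0.05, bins=(64, 64)):
    # Hue from HSV (a stand-in for the HSI hue above) and Cr from YCrCb.
    hue = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.float32)
    cr = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)[:, :, 1].astype(np.float32)

    # Accumulate the Hue-Cr histogram of the frame.
    hist, h_edges, c_edges = np.histogram2d(
        hue.ravel(), cr.ravel(), bins=bins, range=[[0, 180], [0, 256]])

    # Sort bins by population: bins covering the most frequent 40% of pixels are
    # treated as background, bins covering the rarest 5% as foreground candidates.
    order = np.argsort(hist.ravel())[::-1]
    cum = np.cumsum(hist.ravel()[order]) / max(hist.sum(), 1.0)
    bg_bins = order[: np.searchsorted(cum, bg_frac) + 1]
    fg_bins = order[np.searchsorted(cum, 1.0 - fg_frac):]

    # Map every pixel back to its bin and label it: -1 background, +1 candidate, 0 undecided.
    hi = np.clip(np.digitize(hue, h_edges) - 1, 0, bins[0] - 1)
    ci = np.clip(np.digitize(cr, c_edges) - 1, 0, bins[1] - 1)
    flat = (hi * bins[1] + ci).astype(np.int64).ravel()
    labels = np.zeros(flat.shape, dtype=np.int8)
    labels[np.isin(flat, bg_bins)] = -1
    labels[np.isin(flat, fg_bins)] = 1
    return labels.reshape(hue.shape)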

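The temporal fusion F_t(p) = alpha*M(p) + beta*C(p) + gamma*F_{t-1}(p) of Section 2.3 then reduces to a weighted per-pixel sum, as in the sketch below. The weight values are placeholders, since the paper does not report alpha, beta, and gamma.

import numpy as np

def update_foreground_probability(motion, color_term, prev_prob,
                                  alpha=0.4, beta=0.4, gamma=0.2):
    # F_t(p) = alpha*M(p) + beta*C(p) + gamma*F_{t-1}(p); the weights are placeholders.
    prob = alpha * motion + beta * color_term + gamma * prev_prob
    return np.clip(prob, 0.0, 1.0)

# Seeds for region growing are then the pixels with F_t(p) > 0.5, e.g.:
# seeds = update_foreground_probability(M, C, F_prev) > 0.5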
Figure 7. An example of region growing.

2.4 Region Growing

The region growing algorithm consists of a Main function and a RegionGrowing function. The Main function selects the foreground pixels whose y-axis locations lie between the maximum and minimum main lines, and uses these selected pixels as the seeds from which the foreground regions grow.

Function Main() {
    r = 0;
    For each pixel x in the set of foreground pixels {
        If (the y-axis location of x is between the maximum and minimum main lines) {
            If (x does not belong to any labelled region) {
                Label the region number of x as r;
                RegionGrowing(x);
                r = r + 1;
            }
        }
    }
}

The RegionGrowing function grows the desired regions. Let y_h and y'_h be the hue values of pixels y and y' respectively, and y'_s be the saturation value of pixel y'. The symbol N_y denotes the set of neighbours of y. The RegionGrowing function first takes a pixel y, whose region number is r_y, from the SSL queue. All neighbouring pixels whose properties are similar to those of pixel y are classified into the same region; the remaining neighbouring pixels of y are set as boundary pixels of region r_y. Here, T1 and T2 denote the thresholds used to check the pixel properties.

Function RegionGrowing(x) {
    Put x into the SSL_queue;
    While (SSL_queue is not empty) {
        Take a pixel y, which belongs to region r_y, from the SSL_queue;
        For each pixel y' in N_y {
            If (y' is not labelled) {
                D_h = |y_h - y'_h|;
                If (D_h < T1 and y' does not belong to the set of background pixels and y'_s > T2) {
                    Label the region number of y' as r_y;
                    Update the average hue value of region r_y;
                    RegionGrowing(y');
                } else {
                    Label y' as a boundary pixel of region r_y;
                }
            }
        }
    }
}

Figure 7 illustrates the results of the region growing algorithm. The input frame is shown in Figure 7(a), and Figure 7(b) shows the distributions of the foreground pixels (yellow) and the background pixels (blue). The result of the region growing algorithm is shown in Figure 7(c). Notice that the foreground regions are successfully bounded by the background pixels.

Figure 8. Two examples of adapting the link weights. (a) An example depicting an increase in the link weight. (b) An example depicting a decrease in the link weight.
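For reference, the Main/RegionGrowing pseudocode above can also be written iteratively with an explicit queue, which avoids deep recursion when a region is large. The sketch below assumes NumPy arrays for the hue, saturation, seed, and background masks; the threshold values t1 and t2 and the direction of the saturation test are assumptions carried over from the pseudocode.

from collections import deque
import numpy as np

def grow_regions(hue, sat, seeds, background, y_min, y_max, t1=10.0, t2=40.0):
    # region[y, x] = -1 means unlabelled; region numbers are assigned in seed order.
    h, w = hue.shape
    region = -np.ones((h, w), dtype=np.int32)
    next_id = 0
    for y, x in zip(*np.nonzero(seeds)):
        # Only seeds between the minimum and maximum main lines start a region.
        if not (y_min <= y <= y_max) or region[y, x] != -1:
            continue
        region[y, x] = next_id
        mean_hue, count = float(hue[y, x]), 1
        queue = deque([(y, x)])                      # plays the role of the SSL queue
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if not (0 <= ny < h and 0 <= nx < w) or region[ny, nx] != -1:
                    continue
                hue_ok = abs(float(hue[ny, nx]) - mean_hue) < t1
                if hue_ok and not background[ny, nx] and sat[ny, nx] > t2:
                    region[ny, nx] = next_id
                    count += 1                       # keep a running mean of the region hue
                    mean_hue += (float(hue[ny, nx]) - mean_hue) / count
                    queue.append((ny, nx))
                # otherwise the neighbour is simply left as a boundary of this region
        next_id += 1
    return region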

2.5 Object Segmentation

The system segments the objects by combining regions. Each object is represented by a graph in which each region is a node. If two regions are adjacent, a link is added to connect the two corresponding nodes, and the weight of this link represents the strength of the connection between them. Let the two nodes be denoted by n_k and n'_k. The weight of the link is defined as

    w(n_k, n'_k) = max(A(n_k), A(n'_k)) / (2L),

where A(n_k) and A(n'_k) denote the areas of nodes n_k and n'_k respectively, and L denotes the distance between the centers of these two nodes. Here, we assume that the height of a seated student is not greater than twice the distance between two adjacent main lines. If L is greater than twice the distance between two adjacent main lines, the weight is set to zero.

Moreover, the system increases the weight of a link if it connects two nodes that share common neighbors. Figure 8(a) shows an example in which the nodes n_k and n'_k have two common neighbors; the weight of their link is therefore increased by an amount depending on the number of common neighbors, since there is a high probability that these two nodes belong to the same object. Conversely, the system decreases the weight of a link if it connects two nodes that have no common neighbors. Figure 8(b) shows an example in which the nodes n_k and n'_k have no common neighbors; the weight of their link is therefore decreased to a constant value, since there is a low probability that these two nodes belong to the same object.

Figure 9 shows the results of object segmentation. Figure 9(a) shows the original input frame, Figure 9(b) shows the result of region growing, and Figure 9(c) shows the result of object segmentation. It can be seen that the system keeps the objects with substantially large areas, while the smaller objects are ignored.

3. Gesture Recognition

The system divides student gestures into six classes: raising the left hand, raising the right hand, raising two hands, standing up, lying prone, and the normal posture. Figure 10 illustrates examples of these six classes, which are regarded as states and combined to form a finite state machine. We assume that the initial state of the finite automaton is the normal posture.

Given an object i, the system constructs a feature vector F_i^t at frame t, where F_i^t = (a_i^t, p_i^t, m_i^t, d_i^t, g_i^t, c_i^t). Here, a_i^t, p_i^t, and m_i^t indicate the area, the center position, and the motion of the object at frame t, respectively. The symbols d_i^t, g_i^t, and c_i^t are vectors indicating the areas, the center positions, and the colors of the regions belonging to object i, respectively. Changes in these feature values over successive frames can move the finite automaton from one state to another. Thus, we define 14 rules (corresponding to transitions a to n in Figure 10) to trigger the state transitions. For example, if the center positions of some left regions of object i move upwards by more than a preset threshold, and the area and center position of object i increase, then the student may be raising his or her left hand.
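The link weight and its neighbour-based adjustment in Section 2.5 can be sketched as two small helpers. The formula w(n_k, n'_k) = max(A(n_k), A(n'_k)) / (2L) and the zeroing of distant links follow the text; the bonus and penalty used for common neighbours, and the merging strategy mentioned afterwards, are assumptions for illustration.

import numpy as np

def link_weight(area_a, area_b, center_a, center_b, line_spacing):
    # w(n_k, n'_k) = max(A(n_k), A(n'_k)) / (2L); zero if the centers are too far apart.
    distance = float(np.hypot(center_a[0] - center_b[0], center_a[1] - center_b[1]))
    if distance == 0.0 or distance > 2.0 * line_spacing:
        return 0.0
    return max(area_a, area_b) / (2.0 * distance)

def adjust_by_common_neighbours(weight, num_common, bonus=0.1, penalty=0.5):
    # Strengthen links whose endpoints share neighbours, damp those that share none
    # (the bonus and penalty values are assumptions).
    return weight + bonus * num_common if num_common > 0 else weight * penalty

Region pairs whose adjusted weight exceeds a chosen threshold would then be merged, and only the merged components with sufficiently large areas would be kept as student objects, as in Figure 9(c).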
This situation causes a state transition from normal to raising the left hand through transition a, as shown in Figure 10.

4. Experimental Results

The input sequences for our system were acquired using a PTZ camera mounted on a platform and processed on a PC with an Intel Core 2 1.86 GHz CPU. The PTZ camera is set at a height of approximately 155 cm to 175 cm above the floor. The input video sequences are recorded at a rate of 30 frames/second. Figure 11 shows an example in which a student is raising her hand. The first column illustrates three selected frames of this sequence, the second column shows the corresponding region growing results, and the third column shows the segmentation results. In this case the state of the finite state machine changes from normal to raising the right hand through transition b.

Figure 9. The result of object segmentation.
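A skeleton of the gesture finite state machine of Section 3 is sketched below. The six states follow Figure 10, but the paper only illustrates one of the 14 transition rules, so the guard shown for transition a (normal to raising the left hand), the threshold, and the feature names in the dictionaries are assumptions.

STATES = {"normal", "raising_left_hand", "raising_right_hand",
          "raising_two_hands", "standing_up", "lying_prone"}

def next_state(state, feat, prev_feat, rise_threshold=15.0):
    # Only transition a is sketched; its guard is an assumed reading of the rule above.
    if state == "normal":
        # Left regions of the object move upward (smaller image y) by more than a
        # threshold, while the object's area grows and its center also moves upward.
        left_rise = prev_feat["left_center_y"] - feat["left_center_y"]
        if (left_rise > rise_threshold
                and feat["area"] > prev_feat["area"]
                and feat["center_y"] < prev_feat["center_y"]):
            return "raising_left_hand"
        # Transitions b-e to the other gesture states would be tested here.
    # Rules for leaving the non-normal states (transitions f-n) would follow the same pattern.
    return state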

Figure 10. A finite state machine consisting of the student gestures.

Figure 11. A sequence example in which the student is raising her right hand.

Table 1 shows the experimental results of the student gesture recognition system. From this table, it can be observed that a total of 159 gestures were tested. Of these, 115 gestures were recognized correctly, giving an accuracy of approximately 72%.

The accuracy rate of lying prone is higher because this gesture is more independent of the other gestures, while the accuracy rate of standing up is lower, as the recognition of this gesture is easily disturbed by other students situated behind the student under observation. A total of 43 false positive gestures were also recorded. From the experiments, we conclude that lighting is a very important factor affecting the accuracy rate of the gesture recognition system, especially the illumination changes caused by the motion of the students.

Table 1. The experimental results of the student gesture recognition system.

Gesture                    Total no. of gestures   Accuracy rate   Correct recognitions   False positives
Raising the left hand      31                      71%             22                     8
Raising the right hand     33                      73%             24
Raising two hands          32                      72%             23                     5
Lying prone                32                      78%             25                     2
Standing up                31                      68%             21
Total                      159                     72%             115                    43

5. Conclusions

In this paper, we proposed a student gesture analysis system for a lecture theatre. The system first locates the main lines and identifies the foreground pixels. Using region growing, the students can be segmented, and a finite state machine is then used to recognize the various student gestures. Currently, the accuracy rate of the proposed system is 72%. We believe that the accuracy rate can be increased by incorporating prior knowledge into the system. This is a new attempt to apply image processing techniques to help teachers notice certain student behaviours in the classroom, and we hope the system can be further improved and put into practice. Moreover, since more than one student may change their gesture at the same time, using multiple PTZ cameras to detect multiple student gestures simultaneously may be a good direction for future research.

Acknowledgment

The authors would like to thank the National Science Council of the Republic of China, Taiwan, for financially supporting this research under Contract No. NSC E MY2 and NSC S.

References

[1] S. Mitra and T. Acharya, Gesture Recognition: A Survey, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 37(3), 2007.
[2] K. Wan and H. Sawada, Dynamic Gesture Recognition Based on the Probabilistic Distribution of Arm Trajectory, Proceedings of the International Conference on Mechatronics and Automation, Takamatsu, 2008.
[3] Y. Wang and G. Mori, Human Action Recognition by Semilatent Topic Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(10), 2009.
[4] J. Alon, V. Athitsos, Q. Yuan and S. Sclaroff, A Unified Framework for Gesture Recognition and Spatiotemporal Gesture Segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(9), 2009.
[5] P. T. Bao, N. T. Binh, and T. D. Khoa, A New Approach to Hand Tracking and Gesture Recognition by a New Feature Type and HMM, Proceedings of the Sixth International Conference on Fuzzy Systems and Knowledge Discovery, Tianjin, 2009, 3-6.
[6] Q. Chen, N. D. Georganas and E. M. Petriu, Hand Gesture Recognition Using Haar-Like Features and a Stochastic Context-Free Grammar, IEEE Transactions on Instrumentation and Measurement, 57(8), 2008.
