STUDENT GESTURE RECOGNITION SYSTEM IN CLASSROOM 2.0

Chiung-Yao Fang, Min-Han Kuo, Greg-C Lee and Sei-Wang Chen
Department of Computer Science and Information Engineering, National Taiwan Normal University
No. 88, Section 4, Ting-Chou Road, Taipei, 116, Taiwan, R.O.C.
violet@csie.ntnu.edu.tw; babulakau@hotmail.com; leeg@csie.ntnu.edu.tw; schen@csie.ntnu.edu.tw

ABSTRACT

This paper presents a student gesture recognition system for a theatre classroom, a subsystem of Classroom 2.0. In this study, a PTZ camera is set up at the front of the classroom to capture video sequences. The system first pre-processes the input sequence to locate the main lines of the theatre classroom and to extract candidate foreground pixels. Motion and colour information is then used to identify the foreground pixels, which serve as seeds for growing the foreground regions. The system combines the foreground regions to segment the objects, each of which represents an individual student. Six student gestures (raising the right hand, raising the left hand, raising two hands, lying prone, standing up, and normal posture) are classified based on the relationships between the regions within each object. The experimental results demonstrate that the proposed method is robust and efficient.

KEY WORDS

Smart classroom, gesture recognition, object segmentation.

1. Introduction

Smart Classroom, a term proposed recently, usually refers to a room housing an instructor station, equipped with a computer and audiovisual equipment, which allows the instructor to teach using a wide variety of media. In Taiwan, the instructor station is referred to as an e-station, which typically consists of a PC with software, a handwriting tablet, an external DVD player, an amplifier, a set of speakers, a set of wired or wireless microphones, and a unified control panel. This kind of Smart Classroom, however, is in reality not an example of a smart system.

In this paper, we propose the concept of a truly smart classroom, which we name Classroom 2.0. In our view, a Smart Classroom such as Classroom 2.0 should be intelligent, interactive, individualized, and integrated. In the following paragraphs, we briefly introduce the properties and the related systems of Classroom 2.0.

Intelligent: the classroom technology should automatically conduct tasks that do not require human intervention. To enable this, the following four systems should be employed in the classroom:
(1) The intelligent Roll-Call (irollcall) system, which can automatically recognize the students who are present in the classroom.
(2) The intelligent Teaching Response (itres) system, which can automatically control the technologies used in the classroom, including the software systems.
(3) The intelligent Classroom Exception Recognition (icerec) system, which can automatically identify and track human behaviour in the classroom. This system is intended to automatically track the behaviour of students in a classroom and is a revolutionary way of collecting data for educational research.
(4) The intelligent Content Retrieval (icore) system, which can automatically give feedback on the quality of an answer to a given question.

Interactive: the classroom technology should facilitate interaction between the instructor and the students.

Figure 1. Example of three input sequences.

Figure 2. Flowchart of the student gesture recognition system (video sequence input, motion detection, foreground pixel extraction, main line location, foreground pixel identification, region growing, object segmentation, gesture recognition).

Individualized: the classroom technology should react in accordance with the behaviour of individual users.

Integrated: the classroom technologies should be integrated into a single i4 system instead of being made up of separate systems.

From the above description, it is evident that irollcall, itres, icerec, and icore are the four kernel systems of Classroom 2.0. The focus of this paper is the icerec system, which has been designed to recognize specially defined student behaviour in a classroom. The pre-defined exceptions include raising a hand to ask or answer a question, dozing off during the lecture, taking a nap, and standing up or sitting down. More exceptions may be added as required by educational research.

In this study, the students in a classroom are assumed to sit through a lesson, so student gestures constitute a motion space consisting of the upper body, face, and hands. The PTZ camera is set up at the front of the classroom, and three example input sequences are shown in Figure 1. Several difficulties arise. Firstly, the light from the fluorescent lamps may differ between regions of the classroom, and reflections from the chairs make the detection of students more complex. Secondly, in a classroom with a theatre-style layout, students are partially occluded not only by those sitting in front of them but sometimes also by students sitting beside them. Thirdly, the motion segmentation problem must be solved, because more than one student may change their gesture in a given frame.

Mitra and Acharya [1] divided the gestures arising from different body parts into three classes: (1) hand and arm gestures, (2) head and face gestures, and (3) body gestures. We believe that student gestures comprise all three classes. Wan and Sawada [2] used the probabilistic distribution of the arm trajectory and a fuzzy approach to recognize as many as 30 different hand gestures. Their experimental results show that the recognition system obtains high recognition rates for the Rain and Round gestures. However, nine markers must be attached to indicate the feature points of the performer's body, an assumption that is difficult to implement in a classroom.

Figure 3. Flowchart of the main line location process (video sequence input, edge detection, horizontal edge extraction, importance-of-edge calculation, main line candidate extraction, main line identification, output of the main line locations).

Figure 4. An example of the main line location process, panels (a)-(f).

Wang and Mori [3] presented two supervised hierarchical topic models for action recognition based on motion words. Video sequences are described by a bag-of-words representation in which a frame is represented by a single word, and the model is trained by a supervised learning process. Wang and Mori tested five different data sets and showed that their methods achieve superior results. However, most gesture recognition systems are designed to recognize the gestures of a single person [2,3,4,5,6] and do not consider the partial occlusion problem. Our system has been developed to work on multiple students, taking partial occlusion into account.

2. System Flowchart

A flowchart of the student gesture recognition system is shown in Figure 2. Once the video frames have been input into the system, the motion pixels and the candidate foreground pixels are detected. These two sources of information are used to identify the foreground pixels. In parallel, the main lines, which indicate the horizontal lines of the rows of chairs, are located. Using the locations of the main lines as constraints, the foreground regions are grown from the identified foreground pixels, which act as seeds. These foreground regions are then combined to segment the foreground objects, each of which is assumed to represent an individual student. Finally, a gesture recognition technique is applied to identify the various student gestures.

2.1 Main Line Location

Figure 3 shows the flowchart of the main line location process, and Figure 4 shows an example. Once the video frames have been input into the system, the system detects edges using Sobel's approach. Figure 4(a) shows one of the input frames, and the edge detection result is shown in Figure 4(b). The system then extracts the horizontal edges by applying a morphological opening operation with a 5x1 horizontal kernel; the result is shown in Figure 4(c). The horizontal edges are projected in the horizontal direction to obtain the number of edge pixels per row, which is regarded as the importance of a main line edge. Based on this importance measure, the system extracts the main line candidates from the bottom of the frame; the red lines in Figure 4(d) indicate these candidates. The system clusters the candidates into classes and takes the average location of each class as the true position of a main line. Only the three main lines located nearest the bottom of the frame are extracted and preserved for the following steps, as depicted in Figure 4(e). Finally, the system estimates the locations of the remaining main lines using a geometric series, as depicted by the green lines in Figure 4(f).

2.2 Motion Detection

The system detects motion by subtracting the intensity values of the pixels in the (t-1)-th frame from the corresponding pixels in the t-th frame and taking the absolute value of the difference. Let the intensity values of a pixel p at times t-1 and t be I_{t-1}(p) and I_t(p), respectively. The magnitude of the motion of this pixel is then defined as

  M(p) = | I_t(p) - I_{t-1}(p) |.
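The following Python sketch illustrates the main line location steps of Section 2.1 (Sobel edges, horizontal opening with a 5x1 kernel, row projection, candidate clustering, and geometric-series extrapolation). OpenCV and NumPy are assumed; the edge threshold, the clustering tolerance, the number of preserved bottom lines, and the geometric ratio are illustrative values, not those used in the paper.

    # Minimal sketch of the main line location step (Section 2.1), assuming OpenCV and NumPy.
    import cv2
    import numpy as np

    def locate_main_lines(frame_bgr, num_bottom_lines=3, ratio=0.8):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

        # Sobel vertical gradient responds to horizontal edges.
        sobel_y = cv2.Sobel(gray, cv2.CV_64F, dx=0, dy=1, ksize=3)
        edges = (np.abs(sobel_y) > 50).astype(np.uint8)

        # Keep only horizontal structures via a morphological opening with a 5x1 kernel.
        kernel = np.ones((1, 5), np.uint8)
        horiz = cv2.morphologyEx(edges, cv2.MORPH_OPEN, kernel)

        # Project horizontally: the edge-pixel count per row is the "importance" of that row.
        importance = horiz.sum(axis=1)

        # Candidate rows: important rows in the lower half of the frame.
        h = gray.shape[0]
        threshold = 0.5 * importance[h // 2:].max()
        candidates = [y for y in range(h // 2, h) if importance[y] >= threshold]
        if not candidates:
            return []

        # Cluster adjacent candidate rows and keep the averages of the lowest clusters.
        clusters, current = [], [candidates[0]]
        for y in candidates[1:]:
            if y - current[-1] <= 3:
                current.append(y)
            else:
                clusters.append(int(np.mean(current)))
                current = [y]
        clusters.append(int(np.mean(current)))
        main_lines = sorted(clusters)[-num_bottom_lines:]

        # Extrapolate the remaining (higher) main lines with a geometric series:
        # the spacing between chair rows shrinks by a constant ratio towards the top.
        lines = sorted(main_lines, reverse=True)          # bottom of frame upwards
        spacing = lines[-2] - lines[-1] if len(lines) >= 2 else 20
        y = lines[-1]
        while y - spacing * ratio > 0 and spacing * ratio > 2:
            spacing *= ratio
            y -= int(spacing)
            lines.append(y)
        return lines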

Figure 5. An example of motion detection: (a) the (t-1)-th frame, (b) the t-th frame, and (c) the motion detection result.

Figure 5 shows an example of the motion detection process. Figures 5(a) and (b) show the (t-1)-th frame and the t-th frame, respectively, and Figure 5(c) shows the motion detection result.

2.3 Foreground Pixel Extraction and Identification

The input frames are represented in the RGB colour model. For a given pixel p, let its R, G, and B values be R_p, G_p, and B_p, respectively. The hue value h of the pixel in the HSI model is calculated by

  h = cos^{-1}( 0.5 [(r - g) + (r - b)] / [ (r - g)^2 + (r - b)(g - b) ]^{1/2} )            for b <= g (h in [0, pi]),
  h = 2*pi - cos^{-1}( 0.5 [(r - g) + (r - b)] / [ (r - g)^2 + (r - b)(g - b) ]^{1/2} )     for b > g (h in (pi, 2*pi]),

where r = R_p / (R_p + G_p + B_p), g = G_p / (R_p + G_p + B_p), and b = B_p / (R_p + G_p + B_p). Moreover, the Cr value of the pixel in the YCrCb model is calculated by

  Cr = (0.500) R_p + (-0.4187) G_p + (-0.0813) B_p.

Based on these two components, the system first accumulates the pixel counts of the input frame to form a histogram over the hue and Cr values. Figure 6(c) shows the Hue-Cr histogram of the image in Figure 6(a). It is assumed that the background occupies larger regions of the classroom than the foreground. The system therefore normalizes and sorts the bin counts of this histogram; after normalization, the top 40% of the pixels are classified as background pixels, and the bottom 5% of the pixels are classified as candidate foreground pixels.

Subsequently, the system identifies the foreground pixels using motion, colour, and the foreground probability of the previous frame. Given a pixel p, let M(p) be the normalized magnitude of its motion, C(p) the normalized value of the Hue-Cr histogram at the location of p, and F_{t-1}(p) the foreground probability of p at time t-1. The foreground probability of p at time t is then calculated by

  F_t(p) = alpha * M(p) + beta * C(p) + gamma * F_{t-1}(p),

where alpha, beta, and gamma are constants and F_0(p) = 0. If F_t(p) > 0.5, the pixel p is marked as a foreground pixel, shown in yellow in Figure 6(b). Conversely, the top 40% of the pixels are marked as background pixels, shown in blue in Figure 6(b).

Figure 6. An example of foreground pixel extraction: (a) the input frame, (b) the foreground (yellow) and background (blue) pixels, and (c) the Hue-Cr histogram (number of pixels over hue and Cr values).
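A rough sketch of this extraction and identification step is given below, assuming OpenCV and NumPy. The OpenCV HSV hue stands in for the HSI hue formula above, the "top 40% / bottom 5%" rule is interpreted over histogram bins ranked by count, and the weights alpha, beta, gamma and the bin counts are illustrative values rather than those of the paper.

    # Minimal sketch of foreground pixel extraction and identification (Section 2.3).
    import cv2
    import numpy as np

    def hue_cr_foreground(frame_bgr, prev_frame_bgr, prev_prob,
                          alpha=0.4, beta=0.4, gamma=0.2, bins=(32, 32)):
        # prev_prob is F_{t-1}; use an array of zeros for the first frame.
        hue = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.float32)
        cr = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)[:, :, 1].astype(np.float32)

        # 2-D Hue-Cr histogram of the current frame, and a rank per bin (0 = rarest).
        hist, hue_edges, cr_edges = np.histogram2d(hue.ravel(), cr.ravel(), bins=bins)
        order = np.argsort(hist.ravel())
        ranks = np.empty_like(order)
        ranks[order] = np.arange(order.size)
        rank_img = ranks.reshape(hist.shape)

        # Map every pixel to the (normalized) rank of its histogram bin.
        hi = np.clip(np.digitize(hue, hue_edges) - 1, 0, bins[0] - 1)
        ci = np.clip(np.digitize(cr, cr_edges) - 1, 0, bins[1] - 1)
        pixel_rank = rank_img[hi, ci] / float(rank_img.max())

        background = pixel_rank >= 0.60          # most frequent colours: background
        candidate = pixel_rank <= 0.05           # rarest colours: foreground candidates

        # Normalized motion magnitude M(p) from frame differencing (Section 2.2).
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
        prev_gray = cv2.cvtColor(prev_frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
        motion = np.abs(gray - prev_gray) / 255.0

        # Colour term C(p): candidates score highly, other pixels score zero.
        colour = np.where(candidate, 1.0 - pixel_rank, 0.0)

        # F_t(p) = alpha*M(p) + beta*C(p) + gamma*F_{t-1}(p); threshold at 0.5.
        prob = alpha * motion + beta * colour + gamma * prev_prob
        foreground = prob > 0.5
        return foreground, background, prob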

Figure 7. An example of region growing: (a) the input frame, (b) the foreground (yellow) and background (blue) pixels, and (c) the region growing result.

2.4 Region Growing

The region growing algorithm consists of a Main function and a RegionGrowing function. The Main function selects the foreground pixels whose y-axis locations lie between the maximum and minimum main lines and uses these pixels as the seeds from which the foreground regions grow.

Function Main() {
    r = 0;
    For each pixel x in the set of foreground pixels {
        If (the y-axis location of x is between the maximum and minimum main lines) {
            If (x does not belong to any labelled region) {
                Label the region number of x as r;
                RegionGrowing(x);
                r = r + 1;
            }
        }
    }
}

The RegionGrowing function grows the desired regions. Let y_h and y'_h be the hue values of pixels y and y', respectively, and let y'_s be the saturation value of pixel y'. The symbol N_y denotes the set of neighbours of y. The RegionGrowing function repeatedly takes a pixel y, whose region number is r_y, from the SSL_queue. Every neighbouring pixel whose properties are similar to those of y is assigned to the same region; otherwise, the neighbouring pixel is marked as a boundary pixel of region r_y. Here, T_1 and T_2 denote the thresholds used to check the pixel properties.

Function RegionGrowing(x) {
    Insert x into the SSL_queue;
    While (the SSL_queue is not empty) {
        Take a pixel y, which belongs to region r_y, from the SSL_queue;
        For each y' in N_y {
            If (y' is not labelled) {
                D_h = | y_h - y'_h |;
                If (D_h < T_1 and y' does not belong to the set of background pixels and y'_s > T_2) {
                    Label the region number of y' as r_y;
                    Update the average hue value of region r_y;
                    RegionGrowing(y');
                } else {
                    Label y' as a boundary pixel of region r_y;
                }
            }
        }
    }
}

Figure 7 illustrates the result of the region growing algorithm. The input frame is shown in Figure 7(a), and Figure 7(b) shows the distribution of the foreground pixels (yellow) and the background pixels (blue). The result of the region growing algorithm is shown in Figure 7(c). Notice that the foreground regions are successfully bounded by the background pixels.

Figure 8. Two examples of adapting the link weights w(n_k, n'_k): (a) an example in which the weight of the link is increased; (b) an example in which the weight of the link is decreased.
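The pseudocode above combines a recursive call with the SSL_queue. The Python sketch below expresses the same seeded growth with an explicit queue instead of recursion; NumPy is assumed, and the threshold values, the 4-connected neighbourhood, and the running-mean hue update are our own illustrative choices.

    # Minimal sketch of the seeded region growing of Section 2.4.
    from collections import deque
    import numpy as np

    def grow_regions(hue, sat, foreground, background, y_min, y_max, T1=10.0, T2=0.15):
        # hue, sat: float images; foreground/background: boolean masks;
        # y_min, y_max: rows of the highest and lowest main lines.
        h, w = hue.shape
        labels = -np.ones((h, w), dtype=int)      # -1 = unlabelled, -2 = boundary
        region_mean, region_size = {}, {}
        r = 0
        for (y, x) in zip(*np.nonzero(foreground)):
            if not (y_min <= y <= y_max) or labels[y, x] != -1:
                continue
            labels[y, x] = r
            region_mean[r], region_size[r] = hue[y, x], 1
            queue = deque([(y, x)])               # the SSL queue seeded with x
            while queue:
                py, px = queue.popleft()
                for ny, nx in ((py - 1, px), (py + 1, px), (py, px - 1), (py, px + 1)):
                    if not (0 <= ny < h and 0 <= nx < w) or labels[ny, nx] != -1:
                        continue
                    d_h = abs(hue[py, px] - hue[ny, nx])
                    if d_h < T1 and not background[ny, nx] and sat[ny, nx] > T2:
                        labels[ny, nx] = r
                        region_size[r] += 1       # update the running average hue
                        region_mean[r] += (hue[ny, nx] - region_mean[r]) / region_size[r]
                        queue.append((ny, nx))
                    else:
                        labels[ny, nx] = -2       # boundary pixel of region r
            r += 1
        return labels, region_mean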

2.5 Object Segmentation

The system segments the objects by combining regions. An object is represented by a graph in which each region is a node; if two regions are adjacent, a link is added between the two corresponding nodes. The weight of a link represents the strength of the connection between the two nodes. Let the two nodes be n_k and n'_k. The weight of the link is defined as

  w(n_k, n'_k) = max( A(n_k), A(n'_k) ) / (2L),

where A(n_k) and A(n'_k) are the areas of nodes n_k and n'_k, respectively, and L is the distance between the centres of the two nodes. Here, we assume that the height of a seated student is not greater than twice the distance between two adjacent main lines; if L is greater than twice the distance between two adjacent main lines, the weight is set to zero.

Moreover, the system increases the weight of a link if it connects two nodes that have common neighbours. Figure 8(a) shows an example in which nodes n_k and n'_k have two common neighbours; the weight of their link is assigned a value that depends on the number of common neighbours, since there is a high probability that the two nodes belong to the same object. Conversely, the system decreases the weight of a link if it connects two nodes that have no common neighbours. Figure 8(b) shows an example in which nodes n_k and n'_k have no common neighbours; the weight of their link is decreased to a constant value, since there is a low probability that the two nodes belong to the same object.

Figure 9 shows the results of object segmentation. Figure 9(a) shows the original input frame, Figure 9(b) shows the result of region growing, and Figure 9(c) shows the result of object segmentation. It can be seen that the system keeps the objects with substantially large areas, while the smaller objects are ignored.

3. Gesture Recognition

The system divides student gestures into six classes: raising the left hand, raising the right hand, raising two hands, standing up, lying prone, and normal posture. Figure 10 illustrates these six classes, which are regarded as states and assembled into a finite state machine. We assume that the initial state of the finite automaton is the normal posture.

Given an object i, the system constructs a feature vector F_i^t at frame t, where F_i^t = (a_i^t, p_i^t, m_i^t, d_i^t, g_i^t, c_i^t). Here, a_i^t, p_i^t, and m_i^t indicate the area, the centre position, and the motion of the object at frame t, respectively. The symbols d_i^t, g_i^t, and c_i^t are vectors indicating the areas, the centre positions, and the colours of the regions belonging to object i, respectively. Changes in these feature values over successive frames translate the finite automaton from one state to another; we define 14 rules (corresponding to transitions a to n in Figure 10) to perform the state transitions. For example, if the centre positions of some left-side regions of object i move upwards by more than a preset threshold, and the area and centre position of object i increase, then the student may be raising his or her left hand. This situation causes a state transition from normal to raising the left hand through transition a, as shown in Figure 10.
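Returning to the region-combination step of Section 2.5, the sketch below computes the link weights w(n_k, n'_k), adjusts them according to the common neighbours, and merges strongly linked regions into objects. NumPy and SciPy are assumed, and the neighbour bonus, the penalty, and the merge threshold are illustrative values; the paper does not specify how the adjusted weights are finally thresholded.

    # Minimal sketch of the region combination (object segmentation) of Section 2.5.
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def segment_objects(areas, centres, adjacency, line_spacing,
                        bonus=0.2, penalty=0.1, merge_threshold=0.5):
        # areas: dict region -> area; centres: dict region -> (y, x);
        # adjacency: set of frozensets {r1, r2} of adjacent regions;
        # line_spacing: distance between two adjacent main lines.
        regions = sorted(areas)
        index = {r: i for i, r in enumerate(regions)}
        neighbours = {r: set() for r in regions}
        for a, b in (tuple(e) for e in adjacency):
            neighbours[a].add(b)
            neighbours[b].add(a)

        rows, cols, vals = [], [], []
        for a, b in (tuple(e) for e in adjacency):
            L = max(np.hypot(centres[a][0] - centres[b][0],
                             centres[a][1] - centres[b][1]), 1.0)
            if L > 2 * line_spacing:         # taller than a seated student: weight = 0
                continue
            w = max(areas[a], areas[b]) / (2.0 * L)          # w(n_k, n'_k)
            common = len(neighbours[a] & neighbours[b])
            w = w + bonus * common if common else w - penalty
            if w >= merge_threshold:
                rows.append(index[a])
                cols.append(index[b])
                vals.append(1)

        # Regions joined by strong links form one object (a connected component);
        # very small components would then be discarded, as in Figure 9(c).
        graph = csr_matrix((vals, (rows, cols)), shape=(len(regions), len(regions)))
        _, labels = connected_components(graph, directed=False)
        return {r: labels[index[r]] for r in regions}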
4. Experimental Results

The input sequences for our system were acquired using a PTZ camera mounted on a platform and were processed on a PC with an Intel Core 2 1.86 GHz CPU. The PTZ camera is set at a height of approximately 155 cm to 175 cm above the floor, and the input video sequences are recorded at a rate of 30 frames per second.

Figure 11 shows an example sequence in which a student raises her right hand. The first column illustrates three selected frames of this sequence, the second column shows the corresponding region growing results, and the third column shows the segmentation results. In this case the state of the finite state machine changes from normal to raising the right hand through transition b.

Figure 9. The result of object segmentation: (a) the input frame, (b) the region growing result, and (c) the object segmentation result.
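To make the state-transition bookkeeping of the finite state machine (Section 3) concrete, the following sketch implements two of the fourteen transition rules in Python. The trigger conditions (upward motion of the side regions together with an area increase) follow the example given in Section 3, but the threshold values and the rule for returning to the normal state are our own illustrative assumptions, not the paper's exact rules.

    # Minimal sketch of the gesture finite state machine of Section 3.
    from dataclasses import dataclass

    @dataclass
    class Features:
        area: float              # a_i^t
        centre_y: float          # vertical component of p_i^t
        left_region_dy: float    # upward motion of the left-side regions (+ = up)
        right_region_dy: float   # upward motion of the right-side regions (+ = up)

    NORMAL, LEFT, RIGHT, BOTH, STAND, PRONE = (
        "normal", "raising the left hand", "raising the right hand",
        "raising two hands", "standing up", "lying prone")

    def step(state, prev: Features, curr: Features, rise_thresh=8.0):
        # Return the next gesture state of one object given its feature change.
        area_grew = curr.area > prev.area
        if state == NORMAL:
            if curr.right_region_dy > rise_thresh and area_grew:
                return RIGHT                      # transition b in Figure 10
            if curr.left_region_dy > rise_thresh and area_grew:
                return LEFT                       # transition a in Figure 10
        elif state == RIGHT and curr.right_region_dy < -rise_thresh:
            return NORMAL                         # hand lowered again (assumed rule)
        return state                              # no rule fires: stay in the state

    # Example: a student in the normal state whose right-side regions move upwards.
    prev = Features(area=900, centre_y=120, left_region_dy=0, right_region_dy=0)
    curr = Features(area=980, centre_y=115, left_region_dy=1, right_region_dy=12)
    print(step(NORMAL, prev, curr))               # -> "raising the right hand"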

Figure 10. A finite state machine consisting of the student gestures (states: normal, raising the left hand, raising the right hand, raising two hands, standing up, and lying prone; transitions labelled a to n).

Figure 11. A sequence example in which the student is raising her right hand.

Table 1 shows the experimental results of the student gesture recognition system. From this table, it can be observed that a total of 159 gestures were presented to the system. Of these, 115 gestures were recognized correctly, giving an accuracy of approximately 72%. The accuracy rate of lying prone is higher because this gesture is more independent compared with the other gestures, while the accuracy rate of standing up is lower because the recognition of this gesture is easily disturbed by other students situated behind the student under observation. A total of 43 false positive gestures were also recorded. From the experiments, we conclude that lighting is a very important factor affecting the accuracy of the gesture recognition system, especially the light changes caused by the motion of the students.

Table 1. The experimental results of the student gesture recognition system.

  Gesture                  Total no. of gestures   Accuracy rate   Correct recognitions   False positives
  Raising the left hand    31                      71%             22                     8
  Raising the right hand   33                      73%             24                     10
  Raising two hands        32                      72%             23                     5
  Lying prone              32                      78%             25                     2
  Standing up              31                      68%             21                     18
  Total                    159                     72%             115                    43

5. Conclusions

In this paper, we proposed a student gesture analysis system for a lecture theatre. The system first locates the main lines and identifies the foreground pixels. Using region growing, the students are then segmented, and a finite state machine is used to recognize the various student gestures. Currently, the accuracy rate of the proposed system is 72%; we believe that it can be increased by incorporating prior knowledge into the system. This work is a new attempt to apply image processing techniques to help a teacher notice certain student behaviours in the classroom, and we hope the system can be further improved and put into practice. Moreover, since more than one student may change their gesture at the same time, using multiple PTZ cameras to detect multiple student gestures simultaneously may be a good direction for future research.

Acknowledgment

The authors would like to thank the National Science Council of the Republic of China, Taiwan, for financially supporting this research under Contracts No. NSC 98-2221-E-003-014-MY2 and NSC 99-2631-S-003-002-.

References

[1] S. Mitra and T. Acharya, Gesture Recognition: A Survey, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 37(3), 2007, 311-324.
[2] K. Wan and H. Sawada, Dynamic Gesture Recognition Based on the Probabilistic Distribution of Arm Trajectory, Proceedings of the International Conference on Mechatronics and Automation, Takamatsu, 2008, 426-431.
[3] Y. Wang and G. Mori, Human Action Recognition by Semilatent Topic Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(10), 2009, 1762-1774.
[4] J. Alon, V. Athitsos, Q. Yuan and S. Sclaroff, A Unified Framework for Gesture Recognition and Spatiotemporal Gesture Segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(9), 2009, 1685-1699.
[5] P. T. Bao, N. T. Binh and T. D. Khoa, A New Approach to Hand Tracking and Gesture Recognition by a New Feature Type and HMM, Proceedings of the Sixth International Conference on Fuzzy Systems and Knowledge Discovery, Tianjin, 2009, 3-6.
[6] Q. Chen, N. D. Georganas and E. M. Petriu, Hand Gesture Recognition Using Haar-Like Features and a Stochastic Context-Free Grammar, IEEE Transactions on Instrumentation and Measurement, 57(8), 2008, 1562-1571.