Model-based Real-Time Estimation of Building Occupancy During Emergency Egress


Model-based Real-Time Estimation of Building Occupancy During Emergency Egress

Robert Tomastik (1), Satish Narayanan (2), Andrzej Banaszuk (3), and Sean Meyn (4)

(1) Pratt & Whitney, 400 Main St., East Hartford, CT, USA. E-mail: robert.tomastik@pw.utc.com
(2) United Technologies Research Center, 411 Silver Lane, East Hartford, CT, USA. E-mail: NarayaS@utrc.utc.com
(3) United Technologies Research Center, 411 Silver Lane, East Hartford, CT, USA. E-mail: BanaszA@utrc.utc.com
(4) University of Illinois at Urbana-Champaign, 1308 W. Main Street, Urbana, IL, USA. E-mail: meyn@uiuc.edu

Summary. This paper provides a viable and practical solution to the challenge of real-time estimation of the number of people in areas of a building during an emergency egress situation. Such estimates would be extremely valuable to first responders to aid in egress management, search-and-rescue, and other emergency response tactics. The approach of this paper uses an extended Kalman filter, which combines sensor readings with a dynamic stochastic model of people movement. The approach is demonstrated using two types of sensors: video with real-time signal processing to detect the number of people moving in each direction across a threshold such as an entrance/exit, and passive infra-red motion sensors that detect occupancy within their fields of view. The people-movement model uses the key idea that each room has a high-density and a low-density area, where high-density corresponds to a queue of people at a bottleneck exit doorway, and low-density represents unconstrained flow of people. Another key feature of the approach is that constraints on occupancy levels and people flow rates are used to improve the estimation accuracy.
The approach is tested using a stochastic discrete-time simulation model of a 1500 square meter office building with occupancy up to 100 people, having a video camera at each of the three exits, and motion sensors in each of the 42 office rooms. The simulation includes stochastic models of video sensors having a probability of detection of 98%, and motion sensors with probability of detection of 80%. Averaged over 100 simulation runs and averaged over the evacuation time, the sensor-only approach produced a mean estimation error per room of 0.35 people, the Kalman filter with cameras only had a mean error of 0.14 people, and the Kalman filter with all sensors produced a mean error of 0.09 people. These results show that an effective combination of models and sensors greatly improves estimation accuracy compared to the state-of-the-art practice of using sensors only.

1 Introduction

This paper addresses the challenge of real-time estimation of occupancy in buildings or other critical infrastructure (such as subway stations, airports, and industrial facilities) during emergencies such as fire, hazardous gas releases, or chemical spills. Real-time estimates of occupancy can be provided to first responders (such as fire fighters and facility personnel) to accelerate search and rescue, for egress management, and to determine emergency response tactics (such as prioritization of resources for threat mitigation, suppression, and search). State-of-the-art agent-based models (ABM), which model and simulate individual occupant trajectories, are computationally expensive and unsuitable for real-time use. A reduced-order representation of the ABM is developed here and combined with building sensor data to provide fast and accurate estimates of occupancy during evacuation.

The occupancy estimation problem is to determine the number of people in different areas of a building (in each room, zone, floor, etc.), along with an estimation variance to establish a confidence level. Occupancy information at various spatial scales will be required for decisions made by an incident commander (located outside the facility or en route) or by a responder inside the facility. The mean and variance of the estimate must be provided in real time, with a short delay and a fast update rate (e.g. 5 seconds or less). The occupancy estimator can utilize all types of sensing devices in a building, including video cameras, motion sensors, and access control. For video cameras, signal processing algorithms are available to detect the number of people moving in each direction across a threshold such as an entrance/exit to a floor or room.

The challenges of this problem are many.
First, there are diverse modes of sensing available within a building, providing different types of measurement; for example, a video camera can detect the number of people in a room, whereas a passive infra-red motion sensor can only provide coarse occupancy information. Each sensor has a certain reliability and accuracy, where accuracy is typically specified by detection and false-alarm rates. Second, not only are there diverse types of sensors, but the number of sensors in a building is large, typically in the hundreds to thousands, so the volume of data to be processed in real time is high. Third, despite the large number of sensors, many parts of a building are not covered by sensors, making it difficult to measure occupancy in those areas. Finally, redundancy and distributed processing are highly desirable to ensure reliable estimation during a dynamically evolving emergency such as a fire.

The approach here uses an extended Kalman filter to provide real-time occupancy estimates, and is novel in its combination of sensor measurements with a model of how people move during emergency egress, resulting in vastly improved estimation accuracy compared to a sensors-only approach. The approach draws on previous work [1] and extends it along several dimensions, including occupancy estimation at the room level, an improved model of people movement capturing the dynamics of people queuing at an exit, and the inclusion of states for the flow of people between rooms, allowing direct incorporation of flow sensing into the Kalman filter framework. Also, a projection of the filter-generated estimates is performed to create an estimate that falls within the constraints of non-negative occupancy and people flow. Lastly, an improved approach for propagation of estimation variance is presented, which takes into account the nonlinearity arising from state-space constraints (bounds on flow through doorways and on occupancy).

2 Problem Definition

For clarity of presentation, we describe the estimation problem for a particular building, although the approach is generally applicable. The building has two floors; we address the second floor, whose plan is shown in Figure 1. During an emergency egress, occupants typically use the nearest stairs to reach the first floor, where they then exit the building. Occupants do not use the elevator for emergency egress. The floor has an area of 1,500 square meters and up to 100 occupants during normal business hours.

Fig. 1. Floor layout of example building

The end-users of the occupancy estimates wish to view occupancy at various spatial scales. For a building of this size, the spatial scales are: floor level, zone level, and room level. Figure 2 shows the zone definitions that comprise the zone-level spatial scale. The building has a look-down digital video camera above each of the three stairwell exits to the first floor. The cameras provide live video to real-time digital signal processing software, which counts the flow of people moving

in each direction through each exit. These cameras provide a probability of detection of 98% and a false-alarm rate of once every four hours. Also, each of the offices and conference rooms has a passive infra-red motion sensor. This Boolean-output sensor detects whether the room is occupied, with 80% accuracy.

Fig. 2. Floor layout showing zone definitions

The occupancy estimation problem is to determine the number of people in each room, in each zone, and in total on the floor. The estimate must also carry an estimation variance to establish a confidence level. The mean and variance of the estimate must be provided in real time, with a short delay and a fast update rate (e.g. 5 seconds or less). The occupancy estimator should utilize all sensing devices in the building.

3 Sensor-Only Estimator

To assess the benefit of our combined model-sensor estimator, we developed a sensor-only estimator and compare test results of both approaches. Commercial state-of-the-art products for estimating occupancy use video cameras mounted above (and pointing down at) each exit/entry point of the building areas for which occupancy estimates are desired. These cameras use digital signal processing to count, in real time, the people moving in each direction through each exit/entry. A running count is maintained for each area of the building, based on readings from the cameras at each entry/exit to that area. We use this approach, but extend it to include motion sensors and to estimate occupancy in each room (and thus each zone).

The heuristic algorithm consists of three basic steps performed at each time sample:

1) The 3 people-flow cameras at the exits track total occupancy on the floor, by simply adding/subtracting camera readings from a running total.

2) Room-level occupancy estimates for rooms whose motion sensors read unoccupied are set to zero.

3) For the remaining rooms, the running total for the building is divided equally among them.

4 Model-Based Estimator for Building Egress Mode

The estimation problem of this paper lends itself extremely well to the extended Kalman filter (EKF) [2], a well-known and well-tested estimation method for combining sensor data with a model of dynamics (people movement, in this case). In this general approach, the dynamic model is represented as a nonlinear dynamic stochastic state-space model:

x(t + 1) = f(t, x(t)) + v(t)    (1)

where x is the vector of state variables (people occupancy and flow, described later), f is some nonlinear function of time t and states x(t), and v(t) is process noise, representing the uncertainty in how people move in a building. The form of f for people traffic during emergency egress is presented later in this section. The extended Kalman filter also requires a sensor model:

z(t) = h(t, x(t)) + w(t)    (2)

where z is the vector of sensor outputs, h is some function of t and state vector x, and w is sensor noise.

In this paper, we extend our previous approach [1] in several significant ways:

- an improved people-movement model that captures the dynamics of people queuing at bottleneck doorways, using the results of [3],
- the use of multiple types of sensors, specifically video cameras plus passive infra-red motion sensors, and
- improved handling of state-space constraints in the estimation algorithm.

Each of the following sub-sections describes one of the above improvements.

4.1 People Movement Model

The need for real-time estimation requires a people-movement model that is computationally efficient and, of course, as accurate as possible. The kinetic model of [3] fulfills these key requirements and is used here.
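For readers unfamiliar with the EKF, the predict/update recursion implied by models (1) and (2) can be sketched in scalar form as follows. This is illustrative only and not the paper's implementation, which is multivariate and adds the constraint handling described later in this section.

```python
def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF cycle for scalar models x(t+1) = f(x) + v, z(t) = h(x) + w.
    F and H are the derivatives (Jacobians) of f and h;
    Q and R are the process- and sensor-noise variances."""
    # Predict: propagate the mean through f and the variance
    # through the linearization of f.
    x_pred = f(x)
    P_pred = F(x) ** 2 * P + Q
    # Update: weigh the sensor innovation by the Kalman gain.
    innovation = z - h(x_pred)
    S = H(x_pred) ** 2 * P_pred + R   # innovation variance
    K = P_pred * H(x_pred) / S        # Kalman gain
    x_new = x_pred + K * innovation
    P_new = (1.0 - K * H(x_pred)) * P_pred
    return x_new, P_new
```

With a linear model f(x) = x and a direct measurement h(x) = x (so F = H = 1), Q = 0, R = 1, starting from x = 0, P = 1 and measurement z = 1, one step yields x = 0.5, P = 0.5: the estimate moves halfway toward the measurement, as expected when the prior and sensor variances are equal.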
The key idea of the kinetic model is to model a high-density and a low-density area in each room, where the high-density area corresponds to a queue of people at a door, and the low-density area represents unconstrained flow of people. In the Kalman filter, we

model as states for each room: the number of people in the high-density area of the room, the number of people in the low-density area of the room, and the number of people moving into and out of each of these areas. Figure 3 shows a simple two-room example, where the states are defined as:

x1: No. of people in Area 1
x2: No. of people moving from Area 1 to Area 2
x3: No. of people in Area 2
x4: No. of people moving from Area 2 to Area 3
x5: No. of people in Area 3
x6: No. of people moving from Area 3 to Area 4
x7: No. of people in Area 4
x8: No. of people exiting Area 4

Fig. 3. Two-room example used to explain state variables

The state-space model f(x, t) for this simple example is:

x1(t+1) = x1(t) - x2(t)    (3)
x2(t+1) = α1 x1(t)    (4)
x3(t+1) = x3(t) + x2(t) - x4(t)    (5)
x4(t+1) = min(C1, α2 x3(t), A - (x4(t) + x5(t) + x7(t) - x8(t)))    (6)
x5(t+1) = x5(t) + x4(t) - x6(t)    (7)
x6(t+1) = α3 x5(t)    (8)
x7(t+1) = x7(t) + x6(t) - x8(t)    (9)
x8(t+1) = min(C2, α4 x7(t))    (10)

The parameters αi represent the rate of flow of people and can depend on time and states (see [3] for details). The parameters Ci represent the maximum number of people that can move through each door at each time step. The parameter A is the maximum occupancy of the left-most room (Areas 3 and 4), and the term A - (x4(t) + x5(t) + x7(t) - x8(t)) expresses the available space in that room.

The state-space model also includes a noise term v(t). The Kalman filter algorithm uses a covariance matrix of this noise, where the dimension of the matrix corresponds to the number of states and the matrix is diagonal. In the case of the people-movement model, this covariance matrix represents the uncertainty in how people move from area to area. For states that represent occupancy levels (in the two-room model, states 1, 3, 5, and 7), the diagonal term of the covariance matrix is zero, since there is no uncertainty in these state equations. For flow states, the diagonal term represents the uncertainty in flow. Specifically, using x2(t+1) = α1 x1(t) as an example, the term α1 represents the rate at which the x1(t) people move to Area 2. We use the interpretation that αi is the probability that each person will move to the next area. As a result, the number of people who move is binomially distributed, with variance x1(t) α1 (1 - α1). The variance is adjusted to account for constraints, as occur in the equations for x4 and x8. In the implementation of this approach, the Kalman filter uses the state estimate of x1 in place of the actual state, which is unknown.

4.2 Sensor Models

As described earlier in the problem statement, our estimation approach uses video cameras that count people flow, plus motion detectors. The Kalman filter model h(x(t), t) is described in this sub-section, using the two-room example. For video cameras, assume that there are two video cameras, one at the exit of each room.
These cameras thus measure states x4 and x8 directly, with noise terms based on the probability of detection and the probability of false alarm. Thus:

z1(t) = x4(t)    (11)
z2(t) = x8(t)    (12)

For motion sensors, assume that there is a motion sensor in each room. Motion sensor 1 measures whether x1 + x3 >= 1, and motion sensor 2 measures whether x5 + x7 >= 1:

z3(t) = P1 if x1 + x3 >= 1, 0 if x1 + x3 < 1    (13)
z4(t) = P2 if x5 + x7 >= 1, 0 if x5 + x7 < 1    (14)

where P1 is a parameter whose value is the typical number of people occupying the first room (and likewise for P2). The variances of the terms in w(t) for these motion sensors depend on the reading of the sensor. If the sensor reads unoccupied, it is considered reliable in its value of zi = 0, and the variance term is based on the sensor's probability of detection. If the sensor reads occupied, its reading is considered very noisy, because the sensor cannot count the number of people within its range of detection; the noise covariance is therefore set very high (such as to the maximum flow).

4.3 Accounting for Constraints in the Estimate

The Kalman filter can be interpreted as a recursive algorithm to compute the conditional mean (and in fact the entire conditional distribution) of occupancy based on sensor measurements. The filter is not designed to take into account hard constraints such as non-negativity of occupancy, upper bounds on occupancy, and bounds on the rate of occupancy flow. Let R denote a convex region within which the state process x(t) is known to evolve. At each time step, the Kalman filter computes an estimation covariance matrix P (see [2] for details). For any vector x we define

[x]_R = arg min_{y in R} (y - x)^T P^(-1) (y - x)    (15)

This is the projection of x onto R in the weighted Euclidean norm, with weighting matrix given by the inverse of the covariance matrix P. The new estimate is then obtained by taking the Kalman-filter-produced (unconstrained) estimate and projecting it onto R using the above equation. The reason for this non-standard projection is to ensure stability of the filter. In particular, in the ideal setting in which there is no state noise and x(t) is known to evolve in R for each time t, it can be shown that these estimates strictly out-perform those of the unmodified Kalman filter with respect to the weighted norm. This then establishes consistency of the algorithm [4].

4.4 Accounting for Constraints in the Covariance Estimate

As can be seen in the state-space equations, the flow states can be constrained. These constraints can be factored into the predicted covariance calculated by the Kalman filter. They have the effect of reducing the uncertainty of the estimate, because they act as an upper bound on the possible values of the flow-state estimates, as shown in Figure 4. Figures 4a and 4b illustrate how the probability distribution function (PDF), generated from the predicted mean estimate for a state variable and the covariance associated with that variable, is modified by the flow-constraint values. The covariance estimate for each flow state is then set to the variance of the modified PDF, as in Figure 4b.
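To make these pieces concrete, below is a minimal Python sketch (hypothetical code, not the authors' implementation) of one prediction step of the two-room dynamics (3)-(10), together with one way to realize the variance reduction of Section 4.4. The truncated-Gaussian formula is our assumption, since the paper does not spell out the exact computation behind Figure 4.

```python
import math

def step(x, alpha, C, A):
    """One time step of the two-room egress model, eqs. (3)-(10).
    x     : [x1..x8] occupancies and flows
    alpha : [a1..a4] per-person movement probabilities
    C     : [C1, C2] doorway capacities per time step
    A     : maximum occupancy of the left-most room (Areas 3 and 4)"""
    x1, x2, x3, x4, x5, x6, x7, x8 = x
    return [
        x1 - x2,                                   # (3)
        alpha[0] * x1,                             # (4)
        x3 + x2 - x4,                              # (5)
        min(C[0], alpha[1] * x3,
            A - (x4 + x5 + x7 - x8)),              # (6) capacity-limited flow
        x5 + x4 - x6,                              # (7)
        alpha[2] * x5,                             # (8)
        x7 + x6 - x8,                              # (9)
        min(C[1], alpha[3] * x7),                  # (10)
    ]

def truncated_variance(mean, var, upper):
    """Variance of a Gaussian N(mean, var) prediction truncated above at a
    flow constraint `upper` (our reading of the PDF modification in Fig. 4)."""
    sigma = math.sqrt(var)
    beta = (upper - mean) / sigma
    pdf = math.exp(-0.5 * beta * beta) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))
    lam = pdf / cdf
    return var * (1.0 - beta * lam - lam * lam)
```

Truncation always shrinks the variance: with the constraint at the predicted mean the variance drops to roughly 36% of its unconstrained value, and as the constraint moves far above the mean the original variance is recovered, so far-from-binding constraints have no effect.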

Fig. 4. Modification of the PDF due to flow constraints

5 Simulation Test Results

The extended Kalman filter is tested using a stochastic discrete-time simulation model of people movement during egress, including stochastic simulation of sensor performance. The model is of the building shown in Fig. 1 and has an average occupancy of 1.4 people per room at the start of the evacuation, which lasts about 100 seconds. The building has a video camera at each of the three exits, and each camera detects the number of people crossing the exit in each direction, with 98% accuracy. Each of the 42 rooms has a motion sensor that detects whether the room is occupied, with 80% accuracy.

The simulation was run for three different estimators: the sensor-only estimator, the extended Kalman filter using only the three cameras, and the extended Kalman filter using all cameras and motion sensors. Averaged over 100 simulation runs and over the evacuation time, the sensor-only approach produced a mean estimation error per room of 0.35 people, the Kalman filter with cameras only had a mean error of 0.14 people, and the Kalman filter with all sensors produced a mean error of 0.09 people. Figure 5 shows a comparison between the actual occupancy in a building zone (comprising several rooms) and that from the model-based estimator using all video cameras and motion sensors. Figure 6 shows a room-level result and the impact of using motion sensors.

6 Conclusion

In summary, the model-based estimation approach reduces error by an average of 74% compared to using sensors only. Another valuable conclusion is that even though motion sensors indicate only whether a room is occupied, such information is still of value, reducing estimation error by a further 36%. The computational requirements of the extended Kalman filter are such that the update rate of 5 seconds or less is easily met.
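The two headline percentages follow directly from the per-room errors reported in Section 5, as this simple check shows:

```python
sensor_only = 0.35   # mean error, sensor-only estimator (people per room)
ekf_cameras = 0.14   # EKF using only the three cameras
ekf_all     = 0.09   # EKF using cameras plus motion sensors

# Reduction from sensors-only to the full model-based estimator (~74%).
reduction_total = (sensor_only - ekf_all) / sensor_only
# Additional reduction contributed by the motion sensors (~36%).
reduction_motion = (ekf_cameras - ekf_all) / ekf_cameras

print(round(100 * reduction_total), round(100 * reduction_motion))  # 74 36
```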

Fig. 5. Simulation test results showing zone-level occupancy estimates and actuals

Fig. 6. Simulation test results showing conference-room occupancy estimates and actuals

References

1. R. Tomastik, Y. Lin, and A. Banaszuk. Video-based estimation of building occupancy during emergency egress. In Proceedings of the American Control Conference, Seattle, 2008 (to be published). IEEE.

2. Y. Bar-Shalom, X. Rong Li, and T. Kirubarajan. Estimation with Applications to Tracking and Navigation. Wiley-Interscience, 2001.
3. S. Burlatsky, V. Atrazhev, N. Erikhman, and S. Narayanan. A novel kinetic model to simulate evacuation dynamics. In Proceedings of the 4th International Conference on Pedestrian and Evacuation Dynamics, Germany, 2008 (to be published). Springer.
4. T.-L. Chia. Parameter identification and state estimation of constrained systems. PhD thesis, Case Western Reserve University, 1985.