Path Recovery of a Disappearing Target in a Large Network of Cameras using Particle Filter


The Interdisciplinary Center, Herzliya
Efi Arazi School of Computer Science

Path Recovery of a Disappearing Target in a Large Network of Cameras using Particle Filter

M.Sc. dissertation for research project

Submitted by Amir Lev-Tov
Under the supervision of Dr. Yael Moses

February 2010

Acknowledgments

I would like to express my appreciation to my supervisor, Dr. Yael Moses. First, for believing in me and suggesting to me the possibility of research. Secondly, for her willingness to listen to my ideas and share opinions, while keeping me on the right track and making this research possible. Her positive way of thinking, her way of making things look better, and her willingness to help at any time have been very important to me. Her endless dedication and patience during the two years I have worked on this thesis have made our research a great pleasure. I would also like to thank my friends from the laboratory, who made the time I worked on this research very enjoyable. This research was supported by the Israel Science Foundation (grant no. 1339/05).

Abstract

A large network of cameras is necessary for covering large areas in surveillance applications. In such systems, gaps between the fields of view of different cameras are often unavoidable. In this paper we address the problem of path recovery of a single target in such a system, where objects can be out of sight for a long period of time. We assume that the spatio-temporal topology of the network is known, and that an available tracker produces an object identity that might be unreliable (e.g., unreliable object appearance). The task difficulty depends on the spatio-temporal topology, as well as on the possibility of confusing the target with other objects that move around. We propose a function for measuring this confusion. Our tracking method consists of two phases. The first phase produces possible solutions for the location of the target. It is an efficient new approach based on a modified particle filtering framework. The tracking is performed in a state space that consists of object locations and identities. Invisible locations are explicitly modeled by the states. Hence, the detection of targets disappearing and re-appearing is inherent in the algorithm. The second phase computes the object path by applying a shortest path algorithm to the results of the first phase. We tested our tracking approach on a system with hundreds of cameras and thousands of moving objects, and obtained good results. This is perhaps the first solution for this problem that is effective, robust and scalable to large networks of cameras. The results, as expected, vary with the network topology and the possible confusion between objects.

Table of Contents

Acknowledgments i
Abstract ii
Table of Contents iii
List of Figures v
List of Algorithms vi
List of Equations vii
1 Introduction 1
2 Related Work 5
3 Particle Filters 8
4 Problem Formulation and Notations
  4.1 Camera Network Topology
  4.2 Observations
  4.3 Object State
5 The Method
  5.1 First Phase: Particle Filter
  5.2 Second Phase: Path Reconstruction
6 Complexity & Efficiency
  Complexity
  Distributed Computation

7 Confusion Measure 23
8 Experiments
  Generating the Simulated Data
  Results
    Basic Results
    Number of Particles and Comparison of Phases
    Scene Confusion Values and k-best
    Gap Modeling
    Varying the Number of Time Steps
9 Summary and Future Work 36
Appendix: Algorithm Summary 39
  A.1 First Phase: Particle Filter
  A.2 Second Phase: Path Reconstruction
Bibliography 47

List of Figures

5.1 Layered graph example
Similarity distributions of target and observations DB
Grid and street camera network topologies
Example of tracking results on a grid topology
Score as a function of the number of particles and different topologies
Score as a function of the k-best potential solutions
Score as a function of the number of time steps
Example of tracking results on a grid topology with 50 time steps
Example of tracking results on a grid topology with 100 time steps

List of Algorithms

9.1 Tracking on a graph of cameras
First phase - particle filter
Prediction
Evaluation of particles
Diffusion
Second phase - path reconstruction
Define Weights

List of Equations

3.1 General particle filter update
General particle filter prediction
Mahalanobis distance
Similarity function between feature vectors
Hidden states definition
The state space
Particle evaluation given an observation and the target
Particle evaluation given an observation
Definition of the observation model
Final particle weight
Particle update
Normalization
First phase output
Bayes filter on a graph
Definition of a sample space for the confusion measure
Probability of error for the confusion measure
Measure function for sets in the sample space
Definition of the confusion measure
Generative model of simulated data
Locations dispersal estimation

Chapter 1
Introduction

A large network of cameras is often necessary for covering a large area in surveillance applications. Such systems typically have gaps between the fields of view (FOVs) of their cameras, and objects can therefore disappear from view for relatively long periods of time. A major challenge in such a system is to efficiently track an object that moves between different FOVs, while many other objects that may be similar to the target are moving in the scene. We address the problem of path recovery of a single target in a large-scale network of cameras with relatively large gaps between their FOVs and with many other moving objects that are similar to the target. Our solution is efficient and scalable. We consider two applications: One application is an online algorithm for handover tracking of a target between cameras, i.e., tracking the path of a target across a network of cameras. Another application is offline processing of the network's videos in order to search for a specific target. This processing can take a very long time and consume a huge amount of resources. With our solution only a small amount of video data needs to be processed, and the overall path of the target is computed. In addition, other, less probable candidate paths are suggested. We assume that the spatial and temporal topology of the camera network is given or can be computed (e.g., Makris et al. [16], Farrell and Davis [5], Stauffer [20]). Moreover, for simplicity, we assume that the cameras have disjoint FOVs. Otherwise, stitching techniques can be applied to obtain a single FOV (e.g., Szeliski [21]). Several factors affect the difficulty of our task. These include the

number of cameras in the network, the degree of the network's nodes, the size of the gaps between the cameras' FOVs, and the transition probabilities between the cameras. In addition to the assumed available topology, we assume that the identities of objects in each camera's FOV can be computed, and that a similarity function between two object identities is given. There are numerous tracking algorithms that can produce such identity data (e.g., Zhao et al. [24]). The identity computation and similarity function are considered to be black boxes; the better they are, the easier the task is. We suggest a measure for evaluating the identity distinctiveness. This measure is used to evaluate how difficult it is to distinguish between a target and its surroundings in a scene, based on their appearances. Our method consists of two phases. The first is a probabilistic algorithm and the second is a deterministic one. The probabilistic phase is an online multi-hypothesis tracker that produces multiple results. It can be regarded as a modified particle filter (PF). Equipped with spatio-temporal topology information, it explicitly models the expected gaps between cameras' FOVs. This is in contrast to classical PF algorithms that overcome short-time occlusions only through system robustness to errors and noise (e.g., Isard and Blake [8], Kim and Davis [13]). We propose a new state space that consists of locations and object identities, where the locations include hidden locations together with temporal information. Because tracking is performed in this new state space, the detection of disappearing and reappearing objects is inherent in the algorithm. For each time step, our probabilistic algorithm produces a small set of candidate locations and object identities. Using this outcome reduces the computation required for the second phase. The second phase is a deterministic shortest path algorithm which enforces path constraints and reconstructs the best path.
Having specified a multi-hypothesis tracker for the first phase, the shortest path algorithm we used in the second phase enables us to maintain multiple hypotheses for the desired path, and thus to efficiently track the correct path of the target. In particular, it enables us, at each time step, to choose the best path according to the information gathered up to that point in time. Our method is quite efficient, and can be further improved by a distributed implementation. Complexity issues and the distributed implementation are described extensively in Chapter 6. A modified PF framework is chosen for the first phase of our algorithm due to the expected noise in our observation model and the non-linearity of the process. For this reason, a Kalman filter approach is inadequate. Our observation model does not have Gaussian noise, because of within-camera occlusions and

possible confusion between objects. The process is not linear, because the path of a target over the network is arbitrary. In addition, the PF framework yields multiple hypotheses, a property which proved to be vital for this kind of problem. Our motivation is to test a tracking algorithm in an extremely large network consisting of hundreds of cameras and thousands of observations (as in a big city). Moreover, our method is probabilistic, and the state space formed by a small network is trivial for this kind of algorithm. Thus, a large network of cameras is required to demonstrate its power and advantages. Existing studies of path recovery test their algorithms on a few cameras with relatively few objects in motion. For example, Cai et al. [2] use 3 cameras, Song and Roy-Chowdhury [19] use 20 cameras and 10 people, Van de Camp et al. [22] use 6 cameras and 5 people, and Kim et al. [12] use 5 cameras and 10 people. Since a network of hundreds of cameras is a complex setup and acquiring thousands of annotated observations is currently infeasible, we chose to evaluate the system on simulated data sets. Each data set contains a topology with hundreds of cameras, in which thousands of objects are moving. An object's identity in each of the cameras is represented by an identity vector. The identity vectors are computed by the tracking method and contain the object's description. The identity of a given object may vary across cameras. As mentioned before, the identity vectors are not reliable and may be confused with other objects' identity vectors. A collection of such simulated data sets was used to test our method. The data sets have different confusion rates and spatio-temporal topologies. Our results demonstrate the efficiency of our probabilistic algorithm and its scalability. In addition, the results show the advantages of directly modeling the gaps between cameras.
Furthermore, the necessity of a second phase demonstrates that the probabilistic algorithm alone is insufficient, and that the second phase is required for obtaining a more reliable solution. The main contributions of this thesis are:

- An efficient multi-hypothesis algorithm for tracking in a large network of cameras which is scalable with the network's size. It can also be implemented in a distributed framework.
- Application of a PF algorithm to a new state space, different from those used before.

- Direct modeling of gaps between cameras' FOVs as part of the system's states.
- Combination of multi-hypothesis PF results with an online second phase to enforce long-term logic, thus generating a multi-hypothesis path tracker.
- Definition of a confusion-rate function for measuring the difficulty of tracking.

The rest of the thesis is organized as follows: In Chapter 2 we describe the related work. In Chapter 3 we review the PF fundamentals. We define our formal terminology in Chapter 4 and our method in Chapter 5. A discussion of complexity issues and a distributed implementation is presented in Chapter 6. The confusion measure we developed is defined in Chapter 7, and experimental results are presented in Chapter 8. Finally, we conclude and suggest future work in Chapter 9. A formal pseudo-code of the algorithm is given in the appendix.

Chapter 2
Related Work

Particle filters were developed as sequential Monte-Carlo algorithms for Bayesian state estimation (Gordon et al. [6], Arulampalam et al. [1], Isard and Blake [8]). Much progress has been made in the field of tracking since then, but so far these methods were applied only to continuous state spaces, or to discretized versions of them. Among such state spaces in computer vision we can mention the image plane and the ground plane. Our method applies a PF framework to a discrete state space of cameras and identities, which is assumed to have neither an additive operation nor an order relation. Due to these properties, it is impossible to compute the mean state or the median state. In particular, we assume that the projection of the cameras' FOVs to a common ground plane is not available. As a result, classical steps of the PF must be redefined. The multi-camera tracking problem can be categorized by the existence of overlapping FOVs of the cameras. When the FOVs consist of overlapping regions, tracking objects can be regarded as an extension of single camera tracking. The overlapping regions can be used for tracking and handover (e.g., Lien and Huang [15], Qu et al. [17], Guler et al. [7]). Another setting that can be regarded as an extension of single camera tracking is when the objects can be tracked on a common ground plane (Kim and Davis [13], Chilgunde et al. [3], Du and Piater [4], Rahimi et al. [18]). In particular, Kim and Davis [13] and Du and Piater [4] used a PF for tracking. Leoputra et al. [14] used map information to determine gaps between the FOVs of cameras. Methods that address the problem of tracking in a network of cameras with non-overlapping FOVs

often make use of the spatial or temporal topology of the network. Spatial topology refers to the possible transition of an object between cameras' FOVs, or the probability of such a transition, whereas temporal topology refers to the time delays of such transitions. Song and Roy-Chowdhury [19], Javed et al. [9] and Kim et al. [12] used both spatial and temporal information, Zajdel et al. [23] used only spatial information, and Cai et al. [2] used only temporal information. The latter produce the temporal information in a training phase which learns the time delays of objects moving between disjoint FOVs. The representation of the tracking problem as path recovery in a graph, which is related to the second phase of our algorithm, was studied by Kettnaker and Zabih [11], Javed et al. [9], Song and Roy-Chowdhury [19] and Kim et al. [12]. In these studies, each node represents an observation by one of the cameras at a given time. The edges and their weights represent the likelihood that two nodes are successive observations of the same object. The main difference between these methods is the likelihood computation, as well as the optimization used. The likelihood may depend on the similarity of the two observations, and often on the spatial and temporal information about the camera topology. The multi-path recovery is then solved by an optimization solution on this graph. Let us elaborate on the most closely related methods. Song and Roy-Chowdhury [19] address the multi-target problem, i.e., finding the best path for each target moving in the camera network. They used a network graph where each node represents an exit or entry region in the cameras' FOVs. The spatial topology information is represented by binary values that represent adjacency of nodes, i.e., that a direct transition between them is possible.
In their solution an additional graph, the feature graph, is defined, where each node represents an observation in a camera at a certain time step. The edges are determined according to the camera network topology, and a weighting score is calculated as a similarity function between nodes. For the calculation of similarities, they used temporal information in the form of probability density functions (pdfs) for time delays between features in adjacent nodes, as well as appearance and identity similarity. The computation of optimal paths is then applied to the feature graph by finding a maximum matching in a bipartite graph. In their work they measure the consistency of the chosen path of an object and adaptively change the weighting score and uncertainty information between nodes according to that measure.

Leoputra et al. [14] employ a particle filter for tracking objects. Prior information about regions that are not covered by the cameras' FOVs is used. This information is given by a map of the scene that serves as a ground plane map with annotations of non-covered regions. The particles are initialized to reside in all of the regions, and their movement is predicted to the neighboring regions uniformly. Moreover, the weight of the particles in hidden regions is changed only according to the time elapsed since they entered a visible region. Thus, their clustered sampling serves only as a means for associating past observations with current ones and not as a propagation of state density. Zajdel et al. [23] propose a deterministic approach for handling multiple-target tracking in a multi-camera network. They use spatial information about the scene, but only in the form of binary relations between cameras. For online tracking of objects they used a filtering approach as a version of the Bayes filter. They describe a graphical model for modeling the connections between labels assigned to persons, observations, and person appearance models. On this graph they infer Bayesian dependence across those variables and try to solve the data association for the multi-target problem. Because of the intractability of the problem, approximate Bayesian inference is applied and a tractable density for the posterior computation is used. In terms of complexity, note that the above three algorithms require performing tracking in all cameras, and the solution is polynomial in the number of edges. These methods are not scalable to a large number of cameras (in our case hundreds). The first phase of our method reduces this complexity, and only a small number of observations are considered for finding the optimal path.
The choice of weights in the second phase of our algorithm depends on the results of the first phase, and this is the main difference between the second phase of our method and the multi-path optimal recovery method in Song and Roy-Chowdhury [19].

Chapter 3
Particle Filters

In this section we present a brief review of the PF framework for propagating state density over time. We then present a high-level description of the modifications required for our method. Consider a general dynamic system. Let X be a state of the system, e.g., a location of an object. Suppose we are given an observation Z which depends on the state X of the system. Assume also that the system's state and the observation change with time. The PF aims to estimate the state of the system over time efficiently. In practice, it is used mainly for tracking and monitoring. Its advantages are in handling non-linear process dynamics and noise models that are not Gaussian. Under these conditions, the Kalman filter cannot be applied as a closed-form optimal solution. In a PF, the state density is represented by a set of particles X_t = {x_t} with weights {w(x_t)}, namely, a probability mass function. Each particle is in fact a hypothesis of the system's state. The PF efficiently approximates the Bayes filter and can model general distributions. The larger the number of particles, the better the approximation obtained for the Bayes filter. The particle distribution approximates the state density P(X_t | Z_t). X_t is a random variable representing the current state (e.g., the current location of a tracked object), and Z_t represents the current observation. The computation is performed using the prior P(X_{t-1} | Z_{t-1}) from the previous time step, followed by a prediction process that yields a predicted prior for the current time step. The particles representing this prior are weighted by the current observation density, P(Z_t | X_t), and the new distribution is the approximation of the new prior P(X_t | Z_t). Ignoring the constant observation prior P(Z_t),

it follows formally from Bayes' rule:

$$P(X_t \mid Z_t) \propto P(Z_t \mid X_t)\, P(X_t \mid Z_{t-1}). \tag{3.1}$$

The last term can be computed by the dynamic model P(X_t | X_{t-1}) with integration over X_{t-1}. All computations are conditioned on the previous states X_{1..t-1} and Z_{1..t-1}, but depend only on the last state, using Markovian assumptions:

$$P(X_t \mid Z_{t-1}) = \int P(X_t \mid X_{t-1})\, P(X_{t-1} \mid Z_{t-1})\, dx_{t-1}. \tag{3.2}$$

This iterative process requires a prior at time t = 0, P(X_0), which is the initialization. One of the advantages of the PF method when used in tracking is its robustness to errors and occlusions. The hypotheses may be spread over a large part of the state space; therefore, even if the object is temporarily not detected, it might be tracked in one of the successive steps. The steps of a PF algorithm consist of a prediction of particle states, an evaluation of them based on measurements, computation of the output state according to the new density, a random resampling, and a diffusion process for modifying the new set of states. Our method reimplements the steps used in the PF to operate in a new state space. This state space lacks an additive operation and has no order relation. Thus, a moment cannot be calculated from the resulting density as an output. Instead, the maximum likelihood hypothesis is chosen as the output. The diffusion and prediction steps are not implemented at each time step by an additive operation as in the classic PF, and our resampling procedure distinguishes between two sets of particles and resamples from one of them. Our weighting step evaluates only some of the particles according to the observations, but normalizes all of them.
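To make the recursion of (3.1)-(3.2) concrete, the following sketch implements a generic bootstrap particle filter on a toy one-dimensional problem. It is illustrative only and not the thesis's algorithm: the random-walk dynamics, the Gaussian likelihood, and all parameter values are assumptions of this example.

```python
import math
import random

def particle_filter_step(particles, weights, transition, likelihood, z):
    """One bootstrap-PF iteration: resample, predict (eq. 3.2), reweight (eq. 3.1)."""
    # Factored resampling according to the previous weights.
    resampled = random.choices(particles, weights=weights, k=len(particles))
    # Predict each particle through the dynamic model P(X_t | X_{t-1}).
    predicted = [transition(x) for x in resampled]
    # Weight by the observation density P(Z_t | X_t) and normalize.
    w = [likelihood(z, x) for x in predicted]
    total = sum(w)
    return predicted, [wi / total for wi in w]

# Toy 1-D example: random-walk dynamics, Gaussian observation noise.
random.seed(0)
transition = lambda x: x + random.gauss(0.0, 0.5)
likelihood = lambda z, x: math.exp(-0.5 * (z - x) ** 2)

particles = [0.0] * 500
weights = [1.0 / 500] * 500
for z in [0.4, 0.9, 1.3, 1.8]:  # observations of a drifting target
    particles, weights = particle_filter_step(
        particles, weights, transition, likelihood, z)

estimate = sum(x * w for x, w in zip(particles, weights))  # posterior mean
```

In this continuous toy space the output is the posterior mean; in the discrete state space of this thesis no such moment exists, which is precisely why the output step must be redefined.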

Chapter 4
Problem Formulation and Notations

Here we define the formal terminology used to describe the method. We begin with the camera network topology formulation, continue with the observation definitions, and finally define the formal state of our system.

4.1 Camera Network Topology

We represent the topology of our camera network as a weighted directed graph G = (C, E). The nodes C = {C_i}_{i=1}^n represent the cameras, and the edges, E, represent adjacent cameras (direct transitions between the cameras, without passing through other cameras' FOVs). Self edges represent that a target may remain in the same camera's FOV. We define the weight function W_p(C_i, C_j) over the set of edges to be the probability of transition between the cameras C_i and C_j. The edge weight captures the a priori knowledge of the system regarding the behavior of targets in choosing their path. For each node C_i, the sum of the outgoing edge weights, {W_p(C_i, C_j)}_{j=1}^n, is 1. We assume that the transition of a target from one camera to another is independent of its previous location. Therefore, G_mc = (C, E, W_p) can be regarded as a Markov chain representing the object locations over the tracked path. This is not an accurate model of reality, but it is a sufficient approximation. Higher-order transition models can be employed for a more accurate prediction. In our system we assume that the cameras may have large gaps between their FOVs and the time

required for a target to move between different cameras can vary. Let T_{i,j} : R^+ → [0, 1] be a time-delay pdf between adjacent cameras C_i and C_j. We assume that the set of pdfs of all pairs of adjacent cameras is given to the system. Such information can be obtained from other studies, such as that of Farrell and Davis [5]. In particular, U-turns are modeled by a reflexive edge with a corresponding time density function. In our algorithm we consider time as a sequence of discrete segments. Together, T_{i,j} and W_p form the dynamic model of the system.

4.2 Observations

Under the method's assumptions, the tracking result of each camera provides a set of identity features for each of the tracked objects in its FOV. The identity features can be used by another camera to identify the object and may include color histogram, gait, velocity, height, gender, and so forth. Without loss of generality, it can be assumed that the identity features are summarized by a vector f ∈ R^d. Denote by F_t the set of all identity vectors observed by the system at a given time t, and by F = ∪_{t=1}^T F_t the set of identity vectors over the whole tracking period. A function for comparing identity features, d(f_1, f_2), is assumed to be given. Here we take it to be the Mahalanobis distance d_M. For computing the Mahalanobis distance, we assume that the covariance matrix, Σ, of a typical identity vector is known. In this case,

$$d_M(f_1, f_2) = \sqrt{(f_1 - f_2)\,\Sigma^{-1}\,(f_1 - f_2)^T}. \tag{4.1}$$

The distance is used to define the probability P_s of two identity vectors, f_1 and f_2, representing the same object. In our experiments it is defined by:

$$P_s(f_1, f_2) = e^{-\frac{1}{2} d_M^2(f_1, f_2)}. \tag{4.2}$$
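A minimal sketch of the distance and similarity computations of (4.1)-(4.2). For brevity it assumes a diagonal covariance matrix Σ (independent features); the feature vectors and variances below are invented for illustration:

```python
import math

def mahalanobis(f1, f2, sigma_diag):
    # Eq. (4.1) specialized to a diagonal covariance matrix sigma_diag.
    return math.sqrt(sum((a - b) ** 2 / s
                         for a, b, s in zip(f1, f2, sigma_diag)))

def p_same(f1, f2, sigma_diag):
    # Eq. (4.2): probability that two identity vectors describe the same object.
    return math.exp(-0.5 * mahalanobis(f1, f2, sigma_diag) ** 2)

target   = [0.8, 0.1, 0.3]   # e.g. a coarse color histogram (illustrative)
same_obj = [0.7, 0.2, 0.3]   # small feature deviations
other    = [0.1, 0.9, 0.6]   # a clearly different object
sigma    = [0.04, 0.04, 0.04]

high = p_same(target, same_obj, sigma)  # close to 1
low  = p_same(target, other, sigma)     # close to 0
```

With a full covariance matrix the inner product in (4.1) would use Σ^{-1} directly; the diagonal case is only a simplification for the sketch.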

4.3 Object State

An object in our system may be visible to one of the cameras, or it may be hidden. The location and appearance of an object in the system are modeled by a state, s = (l, f), where f ∈ F is the identity vector and l is the location of the object. For a visible object l ∈ C, and for a hidden one, l ∈ H. That is, the object location is given by l ∈ L = C ∪ H. Each hidden location contains the object's destination camera, C_i, as well as its expected time of arrival, ta. The ta has discrete values between 1 and the largest possible time required to arrive at a camera. The range of the ta is determined by the time-delay functions T_{i,j}. In particular, the maximal time of arrival at a given camera, C_i, is denoted by ∆_i. Formally, H is defined by:

$$H = \{(C_i, ta) \mid C_i \in C,\ 1 \le ta \le \Delta_i\}. \tag{4.3}$$

Note that the identity vector may vary over time, and it depends on the object's location. When the object is not visible, we consider the last visible identity vector. The sets of particle states and observation states at time t are denoted by X_t and Z_t, respectively.
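The hidden-location set H of (4.3) can be enumerated directly. The camera names and per-camera maximal arrival times ∆_i below are illustrative assumptions (in the thesis the ∆_i are derived from the time-delay pdfs T_{i,j}):

```python
# Illustrative camera set and per-camera maximal arrival times Delta_i.
cameras = ["C1", "C2", "C3"]
max_arrival = {"C1": 3, "C2": 2, "C3": 4}

# Hidden locations (eq. 4.3): pairs (destination camera, time of arrival).
hidden = [(c, ta) for c in cameras for ta in range(1, max_arrival[c] + 1)]

# The full location set L = C ∪ H; a state is s = (l, f),
# with f the last visible identity vector when l is hidden.
locations = cameras + hidden
```

Note that the state space stays finite and small per camera: each destination camera C_i contributes exactly ∆_i hidden locations.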

Chapter 5
The Method

In this section we present our two-phase method. The formal algorithm, including its pseudo-code, can be found in the appendix. Given the initial state of the target, x_0 = (C_0, f_0), the task is to recover the sequence of states that best represents the path of the target in the network over time. The initial state can be determined manually or automatically, depending on the application. Our method recovers, at each time step, the target's location (visible or hidden) and identity along its path. The first phase of the method is a probabilistic online algorithm that selects a set of the k most probable states at each time step (k-best). Using the k-best states, the second phase recovers the full path of the target using a deterministic algorithm, by holding at each time step a set of paths. Note that the second phase requires only the last step of the first phase. Hence, our algorithm is online.

5.1 First Phase: Particle Filter

A set of N state hypotheses (so-called particles) is spread in the state space

$$S = \{(l, f) : l \in L,\ f \in F\}, \tag{5.1}$$

modeling our belief about the current target's state (location and identity). As in the classic PF algo-

rithm, propagating this belief in S, using a dynamic model and an observation model, provides certainty information for the target's state at each time step. Given the initial state, x_0 = (C_0, f_0), the set of particles is initialized to reside at the target's origin camera C_0, where each particle possesses the target identity vector f_0 and is equally weighted. Prediction: At each time step a prediction is applied to the set of particles {x_t}. The prediction models the expected state of a particle in the next step, x_t ~ P(X_t | X_{t-1}) (i.e., x_t is distributed according to the density P(X_t | X_{t-1})). In our algorithm, while the identity remains the same, a new location is determined by the given Markov chain, G_mc, and {T_{i,j}}, which form the dynamic model of moving targets in the world. Here we use G_mc as a state machine with a single random step from the corresponding camera state. After a particle is chosen to be en route to a new camera, a time of arrival ta is sampled from the respective time-delay pdf, T_{i,j}. No gap is indicated by ta = 1, whereas ta > 1 indicates that the target has entered a hidden state. In this case, in addition to the identity vector and the destination camera, the state consists of the relevant time of arrival, ta. Prediction of the state of a hidden particle is performed by decreasing the ta by one. If ta = 1, it enters a non-hidden state, according to the destination camera. A suggested extension of the prediction step is to use the observed trajectories in order to predict the corresponding particles with a better prior than the static distributions of G_mc. This is further discussed in Chapter 9. Another possible modification of the prediction method is to predict the identity vector. For example, known photometric transformations between cameras may be used to modify the expected histogram of an object's appearance (e.g., Javed et al.
[10]). Observation: The observation is a set of system states Z_t = {z_t}, which are measured at the relevant time step. A tracker, assumed to be given, produces the set of identity vectors F_t from the tracked objects. Clearly, no observation can produce a hidden state. Hence, the set of observations is given by Z_t ⊆ C × F_t. Evaluation: The hypotheses are evaluated according to the set of observations Z_t = {z_t}. We therefore wish to compute P(x_t = z_t | f_0). In our implementation, the location component of the state overrules the identity component when two states are compared. That is, C_{x_t} ≠ C_{z_t} implies

P(x_t = z_t | f_0) = 0. Moreover, a given observation z_t = (C_z, f_z) supports a particle x_t = (C_x, f_x) if their identity vector components are similar. In addition, the identity vector of the observation is required to be similar to that of the target in its initial state, f_0. Therefore, we define:

$$P(x_t = z_t \mid f_0) = \beta P_h(x_t = z_t) + (1 - \beta) P_s(f_0, f_{z_t}), \tag{5.2}$$

where 0 < β < 1 and P_h is a similarity measure that we define next. Given C_{x_t} = C_{z_t}, the similarity P_h between a particle and an observation depends also on the transition probability from the particle's previous camera, C_{x_{t-1}}. That is:

$$P_h(x_t = z_t) = \alpha W_p(C_{x_{t-1}}, C_{x_t}) + (1 - \alpha) P_s(f_{x_t}, f_{z_t}), \tag{5.3}$$

where 0 < α < 1 (we used α, β = 0.5) and P_s is the probability of two identity vectors representing the same object (e.g., (4.2)). W_p is the transition probability function given by the camera network topology. In PF terms, the observation model is defined as

$$P(Z_t \mid X_t = x_t) = w(x_t), \tag{5.4}$$

where w(x_t) is the weight of the particle. To assign a weight to a visible particle, we compute the maximum probability over all observations:

$$w(x_t) = \max_{z_t \in Z_t} P(x_t = z_t \mid f_0). \tag{5.5}$$
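The evaluation of (5.2), (5.3) and (5.5) can be sketched as follows. The transition weights, identity vectors and the unit covariance inside p_s are illustrative assumptions, not values from the thesis:

```python
import math

def p_s(f1, f2):
    # Identity similarity (eq. 4.2); an identity covariance is assumed here.
    return math.exp(-0.5 * sum((a - b) ** 2 for a, b in zip(f1, f2)))

def evaluate(particle, observations, f0, W_p, alpha=0.5, beta=0.5):
    """Weight of a visible particle (eqs. 5.2, 5.3 and 5.5).

    particle is (previous camera, current camera, identity vector);
    an observation is (camera, identity vector)."""
    prev_cam, cam, f_x = particle
    best = 0.0
    for obs_cam, f_z in observations:
        if obs_cam != cam:            # location overrules identity (zero score)
            continue
        p_h = alpha * W_p[(prev_cam, cam)] + (1 - alpha) * p_s(f_x, f_z)
        best = max(best, beta * p_h + (1 - beta) * p_s(f0, f_z))
    return best

W_p = {("C1", "C2"): 0.6, ("C1", "C1"): 0.4}   # toy transition probabilities
f0 = [0.8, 0.2]                                 # target's initial identity vector
particle = ("C1", "C2", [0.8, 0.2])
observations = [("C2", [0.8, 0.2]), ("C3", [0.1, 0.9])]
w = evaluate(particle, observations, f0, W_p)   # 0.5*(0.5*0.6 + 0.5*1) + 0.5*1 = 0.9
```

The observation in the non-matching camera C3 contributes nothing, reflecting the rule that a camera mismatch forces the score to zero regardless of appearance.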

Finally, with probability P_update, we update a particle x_t with the associated observation:

arg max_{z_t ∈ Z_t} P(x_t = z_t | f_0).   (5.6)

In our experiments we used P_update = 0.5. As a result, our current knowledge is updated, while hypotheses that do not correspond to real observations are also maintained. The latter is useful for modeling the case of within-camera occlusion: there, the hypothesis of the occluded object is the correct one, but the object's observation is not available.

Note that it is sufficient to evaluate the particles only in the set of cameras defined by the location components of the set of particles. In addition, the identity vectors are compared only between particles and observations in the same camera. Hence, there is no need to perform tracking in all of the cameras. Consequently, the complexity of our algorithm is significantly reduced, since both tracking and vector comparisons can be computationally expensive. For efficiency, the evaluation is performed only in the N_fc cameras most frequented by particles. In our experiments, we set N_fc = 30; in practice, the particles visited an average of 20 distinct cameras in a typical time step, independent of network size.

Normalization: Hidden particles cannot be supported directly by observations. However, the weights of the visible particles may reflect the weights of hidden-state particles. For example, when none of the visible particles has high support from the observations, the hypothesis that the target is hidden becomes more likely; conversely, a high likelihood of visible particles decreases the likelihood of hidden ones. In our implementation, this is brought about by normalizing the weights of all particles. The initial weight of a hidden particle is taken to be its weight when it entered the hidden state. Then, the weights of all particles are normalized to sum to 1:

ŵ(x_t) = w(x_t) / Σ_{x ∈ X_t} w(x).   (5.7)
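Looking back at the prediction step, a minimal sketch of how a single particle is propagated (hypothetical camera-graph layout and time-delay parameters; not the thesis' Matlab code):

```python
import random

def predict(particle, G_mc, mean_delay, rng=random.Random(7)):
    """One prediction step for a particle (camera, identity, ta).

    G_mc[c]          -- list of (next_camera, prob) pairs, including the
                        reflexive edge of the Markov chain
    mean_delay[i, j] -- mean of the time-delay pdf T_ij (assumed normal
                        with std 0.2 * mean, as in Chapter 8)
    """
    cam, identity, ta = particle
    if ta > 1:                          # hidden: advance the arrival clock
        return (cam, identity, ta - 1)
    # a single random step of the state machine G_mc
    cams, probs = zip(*G_mc[cam])
    nxt = rng.choices(cams, weights=probs)[0]
    if nxt == cam:                      # stayed in the same FOV
        return (cam, identity, 1)
    mu = mean_delay[cam, nxt]           # sample time of arrival from T_ij
    ta = max(1, round(rng.gauss(mu, 0.2 * mu)))
    return (nxt, identity, ta)          # ta > 1 means the particle goes hidden
```

Here ta = 1 encodes "no gap", matching the convention above; the identity component is left untouched, since its perturbation is handled separately in the diffusion step.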

Output: In the classic PF, the set of particles represents the pdf of the tracked state in a continuous space; the location at a given time can therefore be chosen by, for example, the expectation. In our case, neither an order relation between the states nor an additive operation is defined, and thus no moment can be used. We therefore use the normalized weights of the particles to rank the k most probable distinct particles, {x_t^i}_{i=1..k}. For a hidden particle, the i-th output is the particle itself, x_t^i. For a visible particle, the most similar observation is chosen:

Out_t^i = arg max_{z_t ∈ Z_t} P(x_t^i = z_t | f_0).   (5.8)

The k best distinct states (i.e., those with the highest weights) are the input to the second phase of our algorithm, described in Section 5.2.

Resample and Diffusion: The current distribution of particles models our belief P(X_t | Z_t) about the target's state. A resampling procedure is applied to the set of particles, generating a new subset of hypotheses. This is done by factored sampling, as in the classic PF, according to the particles' weights (similar random sampling is also used in the diffusion and prediction steps). This step causes particles with high weights to be duplicated, thus increasing the probability of correctly detecting the next state in their region, whereas particles with low weights are more likely to be discarded.

Another outcome of the resampling step is the generation of duplicated states. To overcome this and to explore more states, a diffusion procedure is applied. It is the analogue of adding noise in the classic PF, but has to be redefined, since our state space, S, lacks an additive operation. To diffuse particles in S, we add noise in each of the state coordinates:

1. To apply noise to the location component, we apply a prediction step using a Markov chain G_mcd = (V, E, W_d) instead of G_mc. G_mcd is similar to G_mc but has higher probabilities for staying at the same camera; thus, most of the particles do not change their location in this step, whereas some of them do. For example, we can define the reflexive transition probabilities of G_mcd to be twice those of G_mc, and renormalize the rest.

2. To apply noise to the identity component of the state, a random choice with probability P_switch determines whether to keep the identity or change it (P_switch is a system parameter, chosen as 0.5 in our experiments). In case of a switch, an observation is randomly chosen from the particle's camera observations, according to their similarities to f_0. Note that P_update, described in the evaluation step, determines the association of the new best observation to the particle, whereas P_switch determines whether a given association is changed.

At this point a new iteration of the algorithm begins, in which a prediction step is applied to the new generation of particles.

5.2 Second Phase: Path Reconstruction

The outcome of the first phase is, for each time step, a set of the k most likely states, prioritized from 1 to k. The correct answer is not necessarily the most probable state, since other states (among which the correct one may be found) can have similar weights. This happens in difficult scenes, where many objects are alike and neither a perfect identity-vector representation nor a perfect similarity function is available. The goal of the second phase is to choose the best path, given the sets of k-best states obtained for each time step.

We define a layered graph G, where the nodes in layer t are the k-best states of time step t. An edge between two nodes in successive layers reflects a legal transition between the corresponding states, as defined by the camera topology; in particular, it depends on the adjacency of the camera nodes and, in the case of hidden locations, on their time values. The weight of an edge is the priority of the state it is directed to. Using this formulation, the best path can be computed by an online shortest-path algorithm on the layered graph. Note that two layers may be disconnected due to errors; in this case, the highest-priority node from the next layer is chosen. This choice allows us to avoid poor results in case of disconnection, while maintaining the benefits of the reconstruction method.

This approach has the advantage of being online, operating efficiently only on the data most recently gathered from the PF at each time step. Implemented with dynamic programming, we in practice maintain and update k paths at every time step, each functioning as a hypothesis for the constructed path

and the shortest one is chosen. Therefore, we effectively obtain a multi-hypothesis path tracker.

Figure 5.1: Example of the layered-graph construction in the second phase. The trajectories on the left are the shortest paths to the corresponding states up to time step t. The nodes in each layer are the prioritized states from the first phase. The weight of an edge is the priority of the state it is directed to.
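The second phase above can be sketched as a dynamic program over the layered graph (hypothetical data layout; `legal` is an assumed predicate standing in for the camera-adjacency and time-delay checks):

```python
def reconstruct_path(layers, legal):
    """Shortest path over a layered graph of k-best states.

    layers      -- list over time steps; layers[t] is a list of states,
                   ordered by priority (index 0 = best). The weight of an
                   edge is the priority (rank) of the state it points to.
    legal(u, v) -- True if the camera topology allows moving from state u
                   to state v.
    Returns one state per time step: the cheapest path's states.
    """
    cost = [0.0] * len(layers[0])       # cost[i]: cheapest path ending at i
    back = []                           # back-pointers per layer
    for t in range(1, len(layers)):
        new_cost, ptr = [], []
        for j, v in enumerate(layers[t]):
            cands = [(cost[i] + j, i)
                     for i, u in enumerate(layers[t - 1]) if legal(u, v)]
            if cands:
                c, i = min(cands)
            else:                       # disconnected layers: restart the
                c, i = float(j), None   # path; top-ranked node is cheapest
            new_cost.append(c)
            ptr.append(i)
        cost = new_cost
        back.append(ptr)
    # trace back the cheapest of the k maintained path hypotheses
    j = min(range(len(cost)), key=cost.__getitem__)
    path = [layers[-1][j]]
    for t in range(len(layers) - 1, 0, -1):
        i = back[t - 1][j]
        if i is None:                   # layer break: keep best-ranked state
            i = 0
        path.append(layers[t - 1][i])
        j = i
    return path[::-1]
```

Edge weights here are the 0-based priorities of the destination states, so top-ranked states are the cheapest to enter, and a disconnected layer restarts the path from the highest-priority node, as described above.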

Chapter 6: Complexity & Efficiency

In this chapter we elaborate on complexity issues. We first present the formal complexity of our algorithm. Afterwards, we compare it to the complexity of the deterministic full-search approach, by showing an equivalent computation of a Bayes filter on a full graph of observations. Finally, we discuss a distributed computation of our algorithm and its complexity.

6.1 Complexity

Our Algorithm's Complexity: One of the main challenges of the path-recovery problem in a large network of cameras is efficiency. The efficiency of our method depends on the number of particles used, N. This number is predefined and depends on the difficulty of the scene, which in turn depends on the similarities of other objects to the target (the confusion measure is defined in Chapter 7), the network topology, its average connectivity degree, and the number of observations. The tracking is performed on a predefined, fixed number of cameras at each time step. The number of comparisons is linear in the number of time steps, assuming a constant average number of observations at each camera and a constant number of particles. In the worst case, the number of comparisons is O(Z̄NT), where Z̄ is the average number of observations at each camera and T is the overall number of time steps. In our experiments Z̄ was 4 and N varied from 80 to 250. Note that the number of basic computations performed by the second

phase at each time step is a negligible additive constant of k².

Naive Algorithm's Complexity: For comparison, we present the efficiency of a deterministic full Bayes filter on the complete state space. We consider a simple model without hidden states; moreover, one of the observations is always the correct answer, i.e., there are no occlusions. We construct a graph G_Full similar to the one described in Section 5.2. Each layer consists of Z nodes, V_t = {v_t}, which represent the observations from the corresponding time step. Two successive layers are fully connected with weighted edges defined by the system's dynamic model, P(X_t | X_{t-1}), and observation model, P(Z_t | X_t). Evaluating a node's likelihood by Bayesian estimation on the graph, we get:

P(x_t | V_t) ∝ P(V_t | x_t) Σ_{x_{t-1}} P(x_t | x_{t-1}) P(x_{t-1} | V_{t-1}).   (6.1)

Note that because one of the observations is always the correct answer, x_t takes only Z values, where Z is the total number of observations in the entire network. Hence, the above computation is carried out Z times, resulting in a complexity of at least O(Z²) per time step. In the same simplified model, our algorithm has a much lower complexity, since Z̄N << Z². When gaps between cameras are also considered in this framework, the comparisons at each time step must be made across up to Δ_max layers, where Δ_max is the maximum time delay. In this case, the overall number of comparisons increases to O(Δ_max Z² T).

6.2 Distributed Computation

Our method is designed for a large network of cameras, and the scalability of our algorithm can be further improved by a distributed implementation, which we discuss in this section. Assume each camera has simple processing ability, a tracking unit, and can communicate with its neighboring cameras, as well as with a central server.
The prediction, observation, evaluation and diffusion steps can be locally computed by each of the cameras, whereas the normalization, output,

reconstruction and resampling steps should be performed centrally on a server (or by a chosen leader). We next suggest one possible implementation.

Each camera holds a set of particles: a set of visible particles located at that camera, and a set of hidden particles that are en route to it. The camera's processor produces identity vectors from the observations in its FOV during the relevant time window. The particles are then evaluated according to the observed vectors, and the results are sent to one of the leader processors (a central server, or a leader chosen from a group of cameras). Each camera then deletes its set of particles and receives a new one only upon the leader's decision. The leader receives the particle weights from the cameras, normalizes them, computes the output state and the reconstructed path, and resamples a new set of particles according to the new normalized weights. The new particles are then sent to the corresponding cameras, and the diffusion and prediction processes are applied to them locally. If a particle moves to another camera, the source camera informs the relevant neighbor about the new particle. After the prediction step ends, tracking of the next time step begins. The leader is responsible for synchronizing the data it receives according to the corresponding time steps. It is beyond the scope of this thesis to explore the classical failure modes of distributed systems.

Regarding the complexity of the distributed algorithm, the leader in such an implementation performs no comparisons between identity vectors. In addition, each camera that contains particles performs a constant number of comparisons that depends on Z̄ and the average number of particles per camera. Recall that the centralized algorithm performs O(Z̄N) vector comparisons.

Chapter 7: Confusion Measure

In this chapter we define the confusion measure. The purpose of this measure is to estimate the difficulty of distinguishing a noisy target from other, similar observations, i.e., the likelihood of confusing the target with other objects. It takes into consideration only the observations, without topology information, and is defined in probabilistic terms. First, we define the relevant elements of the sample space; then we define what constitutes an error; and finally we define the measure itself, given a DB of observations and noisy instances of a target. The confusion measure is used in our experiments (Chapter 8) in order to compare between scenes.

Let F be the set of all observations, f_0 the target at time t = 0, and GT the set of observations of f_0 over time. Consider a similarity measure s between two feature vectors, s : F × F → [0, 1]. (Note that s is the P_s used in Eqs. (4.2), (5.2), and (5.3).) An observation f is a candidate to be misclassified as an observation of f_0 if there exists f' ∈ GT such that s(f, f_0) ≥ s(f', f_0). Using the measure s, we formally define the sample space Ω as the set of observations that are candidates for misclassification as f_0:

Ω = {f ∈ F : s(f, f_0) ≥ min_{f' ∈ GT} s(f', f_0)}.   (7.1)
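Building Ω from Eq. (7.1) is direct. A minimal sketch, with `s` a similarity callable and `F`, `GT` collections of observations (names are ours):

```python
def sample_space(F, GT, f0, s):
    """The candidates for misclassification as f0, per Eq. (7.1)."""
    a = min(s(f, f0) for f in GT)       # lowest similarity of a true observation
    return [f for f in F if s(f, f0) >= a]
```

By construction every element of GT itself lands in Ω, since none of its similarities falls below the minimum.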

Then, the probability of incorrect classification is:

P(Error | DB, f_0) = ‖GT^c‖ / ‖Ω‖,   (7.2)

where GT^c = Ω \ GT. Considering Ω as a non-symmetric sample space, we define the measure of a set A ⊆ Ω as:

‖A‖ = Σ_{w ∈ A} s(w, f_0),   (7.3)

so that a set whose elements have high similarity values receives a high measure. Given the distribution of element similarities, this can be computed by

CM(DB, f_0) = ∫_J f_DB(s) s ds / ∫_J (f_GT(s) + f_DB(s)) s ds,   (7.4)

where f_GT is the histogram function of similarities between f_0 and GT, and f_DB is the histogram function of similarities between f_0 and Ω \ GT. J is the interval [a, 1], where a = min{x ∈ [0, 1] : f_GT(x) > 0}. Fig. 7.1 presents an example of such histograms.

For example, to compute the confusion measure given by the histograms in Fig. 7.1b, we look at the interval [0.6, 1]. The elements corresponding to the bins in this region form the sample space. We sum the values on the blue curve (DB), each weighted by its similarity value; this is the measure of the error set. We likewise sum the weighted values on the red curve (GT); this is the measure of the ground-truth set. Together they give the measure of the whole sample space. Dividing the measure of the error set by that of the sample space, we get the probability of an error, which is the confusion measure.
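As a sketch, Eq. (7.4) can be approximated directly from similarity samples, replacing the integrals over explicit histograms with similarity-weighted sums over J (function and argument names are ours, not the thesis'):

```python
def confusion_measure(sims_gt, sims_db):
    """Approximate CM(DB, f0) of Eq. (7.4) from similarity samples.

    sims_gt -- similarities s(f, f0) for f in GT
    sims_db -- similarities s(f, f0) for the remaining DB observations
    The integrals over J = [min(sims_gt), 1] become weighted sums of the
    samples falling inside J; every GT sample lies in J by construction.
    """
    a = min(sims_gt)                            # left end of the interval J
    err = sum(s for s in sims_db if s >= a)     # measure of the error set
    gt = sum(sims_gt)                           # measure of the GT set
    return err / (gt + err)
```

A well-separated scene, where all DB similarities fall below min(sims_gt), yields CM = 0; as DB observations crowd into J, CM approaches 1.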

Figure 7.1: Similarity distributions of GT and DB (log-scaled). (a) High separation results in a low confusion measure CM. (b) Lower separation results in a higher confusion measure CM.

Chapter 8: Experiments

Due to the probabilistic nature of our algorithm, its power and advantages are likely to be evident only on large networks of cameras. In addition, our method depends on the quality of the within-FOV tracking results, which affects the confusion measure defined in Chapter 7. We therefore tested our method by extensive experiments on simulated data. Our algorithm is implemented in Matlab as a simulator of real environments. In this chapter we first describe the generation process of our environment, and then present the results of the experiments on our algorithm.

8.1 Generating the Simulated Data

Consider a city (e.g., London) with many cameras installed all over it. In such a scene, many people walk from place to place along continuous paths. Each person appears in some cameras' FOVs and disappears from view for a while. The path of a person across the network is the chronological series of cameras in whose FOVs he appeared. One of these persons is the target we want to track. The simulated data provides the camera network topology (spatial and temporal) and an observations database (ODB). An ODB includes the paths of M moving objects and their identities, f_i, changing over time.

Topology: Connections within the camera network are generated according to the desired topology. In our experiments we tested two different topological structures: a street and a Manhattan grid. In order to simulate a camera network installed in a city, we generated a Manhattan-grid topology: the cameras are connected linearly along each street path, and at the junctions we have a clique of 4 nodes, as shown in Fig. 8.1a. The grid consists of cameras with blocks of size 5 × 4, resulting in C = 502 cameras with an average degree of 3.7. In order to test a camera network installed in a street, we simulate a topology with cameras on both sides of the street, where one camera's FOV is not large enough to identify objects on the other side. In graph terms, it is possible for an object to move from a specific camera's FOV to a nearby camera without entering any other camera's FOV; in other words, each camera has 4 adjacent cameras, as depicted in Fig. 8.1b. The average node degree here is 5, but the structure is more linear than that of the grid topology. In our experiments we generated a street topology of 500 cameras.

The transition probabilities between FOVs, G_mc, are generated for each connection as well as for the reflexive edges. In order to avoid a completely uniform transition prior, a small extra weight is given to going in a certain direction. Fig. 8.1 presents the transition probabilities of the Manhattan-grid and street topologies. The time-delay pdfs T_{i,j} are generated as normal distributions; in our experiments we define them by T_{i,j} ~ N(μ_{i,j}, 0.2 μ_{i,j}). The mean μ_{i,j} is randomly chosen according to a distribution of mean gaps determined as a scene parameter. We used a distribution with 20% gaps, 50% of them having a mean of 7 time steps and the others a mean of 3 time steps.

ODB: The initial observed feature f_i and the origin camera C_i are randomly chosen for each of the M objects. To produce the scene dynamics, M paths over the network, imitating M moving objects, are randomly chosen. To do so, a location state is propagated using the prediction step described in Section 5.1 and the T_{i,j} and G_mc that were already generated. Repeating this process T times for each object provides the population dynamics of our scene. One of these objects is randomly chosen to be the target. For generating the identity vector of each object, we use a multivariate normal distribution of the object's identity vector. It is given by a covariance matrix Σ of a typical identity vector in the world,

and an object mean of f_i. The identity features change over time according to Σ and to their first instance at time t = 0, which for simplicity we assume to be the mean. At each time step t, the new identity vector of an object, f_{z_t}, is defined as a linear combination of f'_{z_{t-1}} and f_{z_0}. Formally,

f_{z_t} = γ f_{z_0} + (1 − γ) f'_{z_{t-1}},   (8.1)

where 0 < γ < 1 and f'_{z_{t-1}} ~ N(f_{z_{t-1}}, Σ). In this way, the expectation of f_{z_t} is still f_{z_0}, yet f_{z_t} is not completely independent of its instance at the previous time step. In our experiments we used γ = 0.8.

Figure 8.1: Partial view of the network topologies, with one edge direction shown (self-edges are not drawn). (a) Grid. (b) Street. The black dots are cameras, and edge color represents a transition probability.

It should be noted that the confusion measure CM is an average estimate of the confusion complexity of an ODB, given a target to follow. It is possible, though, that the target will follow a harder or easier path; a harder path may contain more observations similar to the target than the average, and thus yield a worse result.

8.2 Results

The result of our algorithm is a sequence of states, each representing the location and identity of the target. The score of the algorithm is defined as the percentage of correct states along the sequence. Note that


More information

A Spatio-Spectral Algorithm for Robust and Scalable Object Tracking in Videos

A Spatio-Spectral Algorithm for Robust and Scalable Object Tracking in Videos A Spatio-Spectral Algorithm for Robust and Scalable Object Tracking in Videos Alireza Tavakkoli 1, Mircea Nicolescu 2 and George Bebis 2,3 1 Computer Science Department, University of Houston-Victoria,

More information

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore Particle Filtering CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore (CS6240) Particle Filtering 1 / 28 Introduction Introduction

More information

Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion

Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion Simultaneous Appearance Modeling and Segmentation for Matching People under Occlusion Zhe Lin, Larry S. Davis, David Doermann, and Daniel DeMenthon Institute for Advanced Computer Studies University of

More information

Computer vision: models, learning and inference. Chapter 10 Graphical Models

Computer vision: models, learning and inference. Chapter 10 Graphical Models Computer vision: models, learning and inference Chapter 10 Graphical Models Independence Two variables x 1 and x 2 are independent if their joint probability distribution factorizes as Pr(x 1, x 2 )=Pr(x

More information

Discovery of the Source of Contaminant Release

Discovery of the Source of Contaminant Release Discovery of the Source of Contaminant Release Devina Sanjaya 1 Henry Qin Introduction Computer ability to model contaminant release events and predict the source of release in real time is crucial in

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

CSE 490R P1 - Localization using Particle Filters Due date: Sun, Jan 28-11:59 PM

CSE 490R P1 - Localization using Particle Filters Due date: Sun, Jan 28-11:59 PM CSE 490R P1 - Localization using Particle Filters Due date: Sun, Jan 28-11:59 PM 1 Introduction In this assignment you will implement a particle filter to localize your car within a known map. This will

More information

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CHAPTER 4 CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS 4.1 Introduction Optical character recognition is one of

More information

Samuel Coolidge, Dan Simon, Dennis Shasha, Technical Report NYU/CIMS/TR

Samuel Coolidge, Dan Simon, Dennis Shasha, Technical Report NYU/CIMS/TR Detecting Missing and Spurious Edges in Large, Dense Networks Using Parallel Computing Samuel Coolidge, sam.r.coolidge@gmail.com Dan Simon, des480@nyu.edu Dennis Shasha, shasha@cims.nyu.edu Technical Report

More information

Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems

Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems Multi-Person Tracking-by-Detection based on Calibrated Multi-Camera Systems Xiaoyan Jiang, Erik Rodner, and Joachim Denzler Computer Vision Group Jena Friedrich Schiller University of Jena {xiaoyan.jiang,erik.rodner,joachim.denzler}@uni-jena.de

More information

3D Spatial Layout Propagation in a Video Sequence

3D Spatial Layout Propagation in a Video Sequence 3D Spatial Layout Propagation in a Video Sequence Alejandro Rituerto 1, Roberto Manduchi 2, Ana C. Murillo 1 and J. J. Guerrero 1 arituerto@unizar.es, manduchi@soe.ucsc.edu, acm@unizar.es, and josechu.guerrero@unizar.es

More information

Machine Learning A W 1sst KU. b) [1 P] Give an example for a probability distributions P (A, B, C) that disproves

Machine Learning A W 1sst KU. b) [1 P] Give an example for a probability distributions P (A, B, C) that disproves Machine Learning A 708.064 11W 1sst KU Exercises Problems marked with * are optional. 1 Conditional Independence I [2 P] a) [1 P] Give an example for a probability distribution P (A, B, C) that disproves

More information

Introduction to Image Super-resolution. Presenter: Kevin Su

Introduction to Image Super-resolution. Presenter: Kevin Su Introduction to Image Super-resolution Presenter: Kevin Su References 1. S.C. Park, M.K. Park, and M.G. KANG, Super-Resolution Image Reconstruction: A Technical Overview, IEEE Signal Processing Magazine,

More information

Probability Evaluation in MHT with a Product Set Representation of Hypotheses

Probability Evaluation in MHT with a Product Set Representation of Hypotheses Probability Evaluation in MHT with a Product Set Representation of Hypotheses Johannes Wintenby Ericsson Microwave Systems 431 84 Mölndal, Sweden johannes.wintenby@ericsson.com Abstract - Multiple Hypothesis

More information

2564 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 10, OCTOBER 2010

2564 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 10, OCTOBER 2010 2564 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL 19, NO 10, OCTOBER 2010 Tracking and Activity Recognition Through Consensus in Distributed Camera Networks Bi Song, Member, IEEE, Ahmed T Kamal, Student

More information

New Models For Real-Time Tracking Using Particle Filtering

New Models For Real-Time Tracking Using Particle Filtering New Models For Real-Time Tracking Using Particle Filtering Ng Ka Ki and Edward J. Delp Video and Image Processing Laboratories (VIPER) School of Electrical and Computer Engineering Purdue University West

More information

Supervised texture detection in images

Supervised texture detection in images Supervised texture detection in images Branislav Mičušík and Allan Hanbury Pattern Recognition and Image Processing Group, Institute of Computer Aided Automation, Vienna University of Technology Favoritenstraße

More information

Object Tracking with an Adaptive Color-Based Particle Filter

Object Tracking with an Adaptive Color-Based Particle Filter Object Tracking with an Adaptive Color-Based Particle Filter Katja Nummiaro 1, Esther Koller-Meier 2, and Luc Van Gool 1,2 1 Katholieke Universiteit Leuven, ESAT/VISICS, Belgium {knummiar,vangool}@esat.kuleuven.ac.be

More information

Probabilistic Learning Classification using Naïve Bayes

Probabilistic Learning Classification using Naïve Bayes Probabilistic Learning Classification using Naïve Bayes Weather forecasts are usually provided in terms such as 70 percent chance of rain. These forecasts are known as probabilities of precipitation reports.

More information

Bayes Classifiers and Generative Methods

Bayes Classifiers and Generative Methods Bayes Classifiers and Generative Methods CSE 4309 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington 1 The Stages of Supervised Learning To

More information

THE classical approach to multiple target tracking (MTT) is

THE classical approach to multiple target tracking (MTT) is IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 55, NO. 5, MAY 2007 1589 A Bayesian Approach to Multiple Target Detection and Tracking Mark R. Morelande, Christopher M. Kreucher, and Keith Kastella Abstract

More information

Computer Vision Group Prof. Daniel Cremers. 4. Probabilistic Graphical Models Directed Models

Computer Vision Group Prof. Daniel Cremers. 4. Probabilistic Graphical Models Directed Models Prof. Daniel Cremers 4. Probabilistic Graphical Models Directed Models The Bayes Filter (Rep.) (Bayes) (Markov) (Tot. prob.) (Markov) (Markov) 2 Graphical Representation (Rep.) We can describe the overall

More information

Tracking Soccer Ball Exploiting Player Trajectory

Tracking Soccer Ball Exploiting Player Trajectory Tracking Soccer Ball Exploiting Player Trajectory Kyuhyoung Choi and Yongdeuk Seo Sogang University, {Kyu, Yndk}@sogang.ac.kr Abstract This paper proposes an algorithm for tracking the ball in a soccer

More information

Automatic visual recognition for metro surveillance

Automatic visual recognition for metro surveillance Automatic visual recognition for metro surveillance F. Cupillard, M. Thonnat, F. Brémond Orion Research Group, INRIA, Sophia Antipolis, France Abstract We propose in this paper an approach for recognizing

More information

Object Recognition Using Pictorial Structures. Daniel Huttenlocher Computer Science Department. In This Talk. Object recognition in computer vision

Object Recognition Using Pictorial Structures. Daniel Huttenlocher Computer Science Department. In This Talk. Object recognition in computer vision Object Recognition Using Pictorial Structures Daniel Huttenlocher Computer Science Department Joint work with Pedro Felzenszwalb, MIT AI Lab In This Talk Object recognition in computer vision Brief definition

More information

The Comparative Study of Machine Learning Algorithms in Text Data Classification*

The Comparative Study of Machine Learning Algorithms in Text Data Classification* The Comparative Study of Machine Learning Algorithms in Text Data Classification* Wang Xin School of Science, Beijing Information Science and Technology University Beijing, China Abstract Classification

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 5 Inference

More information

Tracking of Human Body using Multiple Predictors

Tracking of Human Body using Multiple Predictors Tracking of Human Body using Multiple Predictors Rui M Jesus 1, Arnaldo J Abrantes 1, and Jorge S Marques 2 1 Instituto Superior de Engenharia de Lisboa, Postfach 351-218317001, Rua Conselheiro Emído Navarro,

More information

Quickest Search Over Multiple Sequences with Mixed Observations

Quickest Search Over Multiple Sequences with Mixed Observations Quicest Search Over Multiple Sequences with Mixed Observations Jun Geng Worcester Polytechnic Institute Email: geng@wpi.edu Weiyu Xu Univ. of Iowa Email: weiyu-xu@uiowa.edu Lifeng Lai Worcester Polytechnic

More information

Robotics. Lecture 5: Monte Carlo Localisation. See course website for up to date information.

Robotics. Lecture 5: Monte Carlo Localisation. See course website  for up to date information. Robotics Lecture 5: Monte Carlo Localisation See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College London Review:

More information

Evaluating Classifiers

Evaluating Classifiers Evaluating Classifiers Charles Elkan elkan@cs.ucsd.edu January 18, 2011 In a real-world application of supervised learning, we have a training set of examples with labels, and a test set of examples with

More information

Regularization and model selection

Regularization and model selection CS229 Lecture notes Andrew Ng Part VI Regularization and model selection Suppose we are trying select among several different models for a learning problem. For instance, we might be using a polynomial

More information

Maintaining accurate multi-target tracking under frequent occlusion

Maintaining accurate multi-target tracking under frequent occlusion Maintaining accurate multi-target tracking under frequent occlusion Yizheng Cai Department of Computer Science University of British Columbia Vancouver, V6T 1Z4 Email:yizhengc@cs.ubc.ca Homepage: www.cs.ubc.ca/~yizhengc

More information

Scale-invariant visual tracking by particle filtering

Scale-invariant visual tracking by particle filtering Scale-invariant visual tracing by particle filtering Arie Nahmani* a, Allen Tannenbaum a,b a Dept. of Electrical Engineering, Technion - Israel Institute of Technology, Haifa 32000, Israel b Schools of

More information

A noninformative Bayesian approach to small area estimation

A noninformative Bayesian approach to small area estimation A noninformative Bayesian approach to small area estimation Glen Meeden School of Statistics University of Minnesota Minneapolis, MN 55455 glen@stat.umn.edu September 2001 Revised May 2002 Research supported

More information

Online Tracking Parameter Adaptation based on Evaluation

Online Tracking Parameter Adaptation based on Evaluation 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance Online Tracking Parameter Adaptation based on Evaluation Duc Phu Chau Julien Badie François Brémond Monique Thonnat

More information

Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History

Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Revising Stereo Vision Maps in Particle Filter Based SLAM using Localisation Confidence and Sample History Simon Thompson and Satoshi Kagami Digital Human Research Center National Institute of Advanced

More information

Humanoid Robotics. Monte Carlo Localization. Maren Bennewitz

Humanoid Robotics. Monte Carlo Localization. Maren Bennewitz Humanoid Robotics Monte Carlo Localization Maren Bennewitz 1 Basis Probability Rules (1) If x and y are independent: Bayes rule: Often written as: The denominator is a normalizing constant that ensures

More information

Scene Segmentation in Adverse Vision Conditions

Scene Segmentation in Adverse Vision Conditions Scene Segmentation in Adverse Vision Conditions Evgeny Levinkov Max Planck Institute for Informatics, Saarbrücken, Germany levinkov@mpi-inf.mpg.de Abstract. Semantic road labeling is a key component of

More information

Human Upper Body Pose Estimation in Static Images

Human Upper Body Pose Estimation in Static Images 1. Research Team Human Upper Body Pose Estimation in Static Images Project Leader: Graduate Students: Prof. Isaac Cohen, Computer Science Mun Wai Lee 2. Statement of Project Goals This goal of this project

More information

Efficient Feature Learning Using Perturb-and-MAP

Efficient Feature Learning Using Perturb-and-MAP Efficient Feature Learning Using Perturb-and-MAP Ke Li, Kevin Swersky, Richard Zemel Dept. of Computer Science, University of Toronto {keli,kswersky,zemel}@cs.toronto.edu Abstract Perturb-and-MAP [1] is

More information

CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING

CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING CAMERA POSE ESTIMATION OF RGB-D SENSORS USING PARTICLE FILTERING By Michael Lowney Senior Thesis in Electrical Engineering University of Illinois at Urbana-Champaign Advisor: Professor Minh Do May 2015

More information

Markov Decision Processes and Reinforcement Learning

Markov Decision Processes and Reinforcement Learning Lecture 14 and Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Slides by Stuart Russell and Peter Norvig Course Overview Introduction Artificial Intelligence

More information

EXERCISES SHORTEST PATHS: APPLICATIONS, OPTIMIZATION, VARIATIONS, AND SOLVING THE CONSTRAINED SHORTEST PATH PROBLEM. 1 Applications and Modelling

EXERCISES SHORTEST PATHS: APPLICATIONS, OPTIMIZATION, VARIATIONS, AND SOLVING THE CONSTRAINED SHORTEST PATH PROBLEM. 1 Applications and Modelling SHORTEST PATHS: APPLICATIONS, OPTIMIZATION, VARIATIONS, AND SOLVING THE CONSTRAINED SHORTEST PATH PROBLEM EXERCISES Prepared by Natashia Boland 1 and Irina Dumitrescu 2 1 Applications and Modelling 1.1

More information

Navigation methods and systems

Navigation methods and systems Navigation methods and systems Navigare necesse est Content: Navigation of mobile robots a short overview Maps Motion Planning SLAM (Simultaneous Localization and Mapping) Navigation of mobile robots a

More information

Mixture Models and EM

Mixture Models and EM Mixture Models and EM Goal: Introduction to probabilistic mixture models and the expectationmaximization (EM) algorithm. Motivation: simultaneous fitting of multiple model instances unsupervised clustering

More information

PEOPLE IN SEATS COUNTING VIA SEAT DETECTION FOR MEETING SURVEILLANCE

PEOPLE IN SEATS COUNTING VIA SEAT DETECTION FOR MEETING SURVEILLANCE PEOPLE IN SEATS COUNTING VIA SEAT DETECTION FOR MEETING SURVEILLANCE Hongyu Liang, Jinchen Wu, and Kaiqi Huang National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Science

More information

Car tracking in tunnels

Car tracking in tunnels Czech Pattern Recognition Workshop 2000, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 2 4, 2000 Czech Pattern Recognition Society Car tracking in tunnels Roman Pflugfelder and Horst Bischof Pattern

More information

Counting People from Multiple Cameras

Counting People from Multiple Cameras Counting People from Multiple Cameras Vera Kettnaker Ramin Zabih Cornel1 University Ithaca, NY 14853 kettnake,rdz@cs.cornell.edu Abstract We are interested in the content analysis of video from a collection

More information

Application of Support Vector Machine Algorithm in Spam Filtering

Application of Support Vector Machine Algorithm in  Spam Filtering Application of Support Vector Machine Algorithm in E-Mail Spam Filtering Julia Bluszcz, Daria Fitisova, Alexander Hamann, Alexey Trifonov, Advisor: Patrick Jähnichen Abstract The problem of spam classification

More information

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot

Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Canny Edge Based Self-localization of a RoboCup Middle-sized League Robot Yoichi Nakaguro Sirindhorn International Institute of Technology, Thammasat University P.O. Box 22, Thammasat-Rangsit Post Office,

More information

Markov Random Fields and Segmentation with Graph Cuts

Markov Random Fields and Segmentation with Graph Cuts Markov Random Fields and Segmentation with Graph Cuts Computer Vision Jia-Bin Huang, Virginia Tech Many slides from D. Hoiem Administrative stuffs Final project Proposal due Oct 27 (Thursday) HW 4 is out

More information

Background subtraction in people detection framework for RGB-D cameras

Background subtraction in people detection framework for RGB-D cameras Background subtraction in people detection framework for RGB-D cameras Anh-Tuan Nghiem, Francois Bremond INRIA-Sophia Antipolis 2004 Route des Lucioles, 06902 Valbonne, France nghiemtuan@gmail.com, Francois.Bremond@inria.fr

More information

Data-driven Depth Inference from a Single Still Image

Data-driven Depth Inference from a Single Still Image Data-driven Depth Inference from a Single Still Image Kyunghee Kim Computer Science Department Stanford University kyunghee.kim@stanford.edu Abstract Given an indoor image, how to recover its depth information

More information

Particle Filters for Visual Tracking

Particle Filters for Visual Tracking Particle Filters for Visual Tracking T. Chateau, Pascal Institute, Clermont-Ferrand 1 Content Particle filtering: a probabilistic framework SIR particle filter MCMC particle filter RJMCMC particle filter

More information

Chapter 2 Basic Structure of High-Dimensional Spaces

Chapter 2 Basic Structure of High-Dimensional Spaces Chapter 2 Basic Structure of High-Dimensional Spaces Data is naturally represented geometrically by associating each record with a point in the space spanned by the attributes. This idea, although simple,

More information

Tracking by Cluster Analysis of Feature Points using a Mixture Particle Filter

Tracking by Cluster Analysis of Feature Points using a Mixture Particle Filter Tracking by Cluster Analysis of Feature Points using a Mixture Particle Filter Wei Du Justus Piater University of Liege, Department of Electrical Engineering and Computer Science, Institut Montefiore,

More information

Improving the Efficiency of Fast Using Semantic Similarity Algorithm

Improving the Efficiency of Fast Using Semantic Similarity Algorithm International Journal of Scientific and Research Publications, Volume 4, Issue 1, January 2014 1 Improving the Efficiency of Fast Using Semantic Similarity Algorithm D.KARTHIKA 1, S. DIVAKAR 2 Final year

More information

Predictive Indexing for Fast Search

Predictive Indexing for Fast Search Predictive Indexing for Fast Search Sharad Goel, John Langford and Alex Strehl Yahoo! Research, New York Modern Massive Data Sets (MMDS) June 25, 2008 Goel, Langford & Strehl (Yahoo! Research) Predictive

More information

Hidden Loop Recovery for Handwriting Recognition

Hidden Loop Recovery for Handwriting Recognition Hidden Loop Recovery for Handwriting Recognition David Doermann Institute of Advanced Computer Studies, University of Maryland, College Park, USA E-mail: doermann@cfar.umd.edu Nathan Intrator School of

More information

Continuous Multi-View Tracking using Tensor Voting

Continuous Multi-View Tracking using Tensor Voting Continuous Multi-View Tracking using Tensor Voting Jinman Kang, Isaac Cohen and Gerard Medioni Institute for Robotics and Intelligent Systems University of Southern California {jinmanka, icohen, medioni}@iris.usc.edu

More information

Workshop report 1. Daniels report is on website 2. Don t expect to write it based on listening to one project (we had 6 only 2 was sufficient

Workshop report 1. Daniels report is on website 2. Don t expect to write it based on listening to one project (we had 6 only 2 was sufficient Workshop report 1. Daniels report is on website 2. Don t expect to write it based on listening to one project (we had 6 only 2 was sufficient quality) 3. I suggest writing it on one presentation. 4. Include

More information

Bayesian Methods in Vision: MAP Estimation, MRFs, Optimization

Bayesian Methods in Vision: MAP Estimation, MRFs, Optimization Bayesian Methods in Vision: MAP Estimation, MRFs, Optimization CS 650: Computer Vision Bryan S. Morse Optimization Approaches to Vision / Image Processing Recurring theme: Cast vision problem as an optimization

More information