Probability Evaluation in MHT with a Product Set Representation of Hypotheses


Johannes Wintenby
Ericsson Microwave Systems
431 84 Mölndal, Sweden
johannes.wintenby@ericsson.com

Abstract - Multiple Hypothesis Tracking algorithms that rely on hypothesis probabilities for pruning typically generate the n best global hypotheses. In some cases, the probability mass is diffuse in the space of global hypotheses and a large n is desirable, implying a high computational demand. In this work, we present an alternative method for evaluating hypothesis probabilities, in which global hypotheses are represented with exclusive product sets. Each product set has the potential of representing many global hypotheses. A method that generates the product sets is introduced, including a recursive formulation for computational tractability. In numerical evaluations, the method is compared to an optimization-based method that generates the n best hypotheses. Both an improved ability of representation and a reduced computational demand are demonstrated in a constructed example.

Keywords: Tracking, data association, Multiple Hypothesis Tracking, hypothesis pruning.

1 Introduction

There are several approaches to hypothesis pruning and maintenance in Multiple Hypothesis Tracking (MHT). In Structured Branching MHT, and in the extensions of Murty's method to MHT, a set of global and feasible assignment hypotheses is generated to approximately calculate the probabilities of the assignment hypotheses. The probabilities form a base for efficient pruning strategies. Moreover, they provide information on the ability to resolve association ambiguities.

The quality of the probability calculation depends on the maximum number of global assignment hypotheses that can be generated or maintained in parallel. In certain situations, the probability mass is diffuse in the space of global hypotheses, and the number of global hypotheses needed for satisfactory support is then potentially large. Such situations may occur in stretched assignment clusters with multiple tracked targets in environments with high clutter density and disturbing objects. Assignment clusters with many targets occur, for example, in ground target tracking and in tracking of aircraft given range- and Doppler-ambiguous data from a medium-PRF radar. Real-time and computational constraints limit the maximum number of global hypotheses. Thus, there are motivations for studying alternatives to existing methods for the computation of hypothesis probabilities. In particular, we are interested in methods that can support probability distributions with larger spread.

In this paper, a set-based method for generating and storing global hypotheses is presented. The main ideas of the paper are summarized as follows:

(i) Assume an MHT algorithm where competing histories of observation-to-track assignments, here denoted track-hypotheses, are organized target-wise in trees, see Section 2. Track-hypotheses are in conflict if they include the same observation; they are then denoted incompatible. A feasible global hypothesis is a set of compatible track-hypotheses including one track-hypothesis from each target-tree. Observations not participating in the global hypothesis are regarded as false under the hypothesis. The idea is to form the space of all global hypotheses as a product space of track-hypotheses from each target-tree, see Section 3. One dimension in the product space corresponds to the track-hypotheses in one target-tree.
The feasible set in this space is the union of all feasible global hypotheses.

(ii) The feasible set of global hypotheses can favorably be represented as a union of exclusive product sets of track-hypotheses. Each product set contains one or more track-hypotheses from each target-tree.

(iii) The unions of exclusive product sets are generated from the compatibility constraints between track-hypotheses from different trees. This can be done in several ways. A proposed method sequentially traverses the compatibility matrix [1, Chapter 7], successively imposing compatibility constraints via set intersections, see Section 3.1. To withstand a potential combinatorial explosion of the number of product sets, the product sets are ordered according to their summed probability mass at each intersection, and only the n product sets with the largest probabilities are selected and carried on to the next iteration of the algorithm. This method corresponds to an approximate, ranked assignment procedure.

(iv) Given the product-set representation of the globally feasible hypotheses, the marginal probabilities of the track-hypotheses are calculated for each target-tree. These probabilities are then used for pruning and presentation purposes.

(v) Step (iii) is made recursive by

1. allowing the product space to grow and shrink with the size of the target-trees. The definition of the product space changes at pruning, when new measurements arrive, or when new target-trees are included in an assignment cluster.

2. recalculating the product sets from prior time instants when the product space changes its definition.

3. imposing compatibility constraints from a new frame of observations. The following steps apply:

a) Generate a new set of product sets containing the compatible global hypotheses given the compatibility constraints from the new observations only. This is done efficiently either with a series of intersection steps as described in (iii) above, or with an optimization-based ranked assignment procedure [2].

b) Intersect the new set of product sets from a) with the prior set of product sets from 2.

Computationally, step 3 is typically the most demanding. However, in applications of interest (multiple targets and clutter) the computational demand of this step often compares favorably to Murty's algorithm extended to MHT, see Section 5. While Murty's algorithm produces the n best posterior global hypotheses, the method herein produces the n best product sets with similar or lower complexity, depending on the situation.

The paper is organized as follows: In Section 2, the MHT framework used in this work is presented together with the chosen notation. Section 3 introduces the product-set based method, and a recursive form is derived in Section 4. In Section 5, numerical comparisons are made to a reference implementation based on Murty's ranked assignment procedure.

2 Multiple Hypothesis Tracking framework

A history of observation-to-track assignments is here denoted a track-hypothesis. State filtering is carried out per track-hypothesis given the assignment history. Similar to [3], we choose to organize track-hypotheses in a set of target-trees, one tree for each detected target. The root of a tree represents an initial detection of a new target. Further, each branch is an ambiguous assignment history (track-hypothesis) of that particular target. Only one of the track-hypotheses in a single tree can be valid. In the case of multiple trees, track-hypotheses from different trees might contain the same observation. These track-hypotheses are then said to be incompatible. Only one track-hypothesis in a group of incompatible track-hypotheses can be valid.

Denote a track-hypothesis as h_ij, where i is the index of the tree and j is the index of the track-hypothesis within the tree. A global hypothesis is a set of track-hypotheses including exactly one track-hypothesis from each tree. Each global hypothesis represents an alternative interpretation of the origin of a sequence of received data batches. Denote a global hypothesis as H_k, where k enumerates the global hypotheses. If there are n target-trees, global hypothesis k is written as

H_k = \{h_{1j_{1k}}, h_{2j_{2k}}, \ldots, h_{nj_{nk}}\}.

Observations not assigned to any track are assumed to be false under the hypothesis. A global hypothesis is feasible if it only contains compatible track-hypotheses. Consider the example with ambiguous observation-to-track assignments in Figure 1. The relations between track-hypotheses and global hypotheses in the example are shown in Figure 2.

Figure 1: An example with three tracked targets and ambiguous observation-to-track assignments at times t-2, t-1, and t. The crosses represent the observations, and the numbers represent the observation numbers.

Figure 2: Relation between track-hypotheses, target-trees, and global hypotheses in the scenario in Figure 1.
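To make the structures of this section concrete, the following is a minimal Java sketch (Java being the language of the evaluation code mentioned in Section 5). The class and field names are illustrative assumptions and not the paper's implementation; a track-hypothesis is reduced to the set of observation numbers it claims, plus its likelihood.

```java
import java.util.*;

// Illustrative sketch: a track-hypothesis is an assignment history within one
// target-tree, and a global hypothesis picks one track-hypothesis per tree.
final class TrackHypothesis {
    final int tree;                  // index i of the target-tree
    final int index;                 // index j of the hypothesis within the tree
    final Set<Integer> observations; // observation numbers assigned in this history
    final double likelihood;         // p_{h_ij}, accumulated over observation batches

    TrackHypothesis(int tree, int index, Set<Integer> observations, double likelihood) {
        this.tree = tree; this.index = index;
        this.observations = observations; this.likelihood = likelihood;
    }

    // Track-hypotheses are incompatible if they come from the same tree or share an observation.
    boolean compatibleWith(TrackHypothesis other) {
        if (other.tree == this.tree) return false;
        return Collections.disjoint(this.observations, other.observations);
    }
}

final class GlobalHypothesis {
    final List<TrackHypothesis> members; // exactly one track-hypothesis per target-tree

    GlobalHypothesis(List<TrackHypothesis> members) { this.members = members; }

    // Feasible if all members are pairwise compatible.
    boolean isFeasible() {
        for (int a = 0; a < members.size(); a++)
            for (int b = a + 1; b < members.size(); b++)
                if (!members.get(a).compatibleWith(members.get(b))) return false;
        return true;
    }
}
```

Enumerating all feasible selections of one member per tree is exactly the operation that becomes expensive in the situations discussed in the introduction, which is what motivates the product-set representation of Section 3.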
An MHT algorithm that maintains the hypothesis structures operates in principle as follows: At the reception of a batch of new observations, all track-hypotheses potentially correlating with the new batch are predicted to the generation times of the detections. Unlikely correlations are excluded with clustering and gating. Likelihoods of observation-to-track assignments are established for each track-hypothesis. Unlikely assignments are disqualified with gating. Track-hypotheses, and thus target-trees, are expanded given the new assignments. The most likely global hypothesis, or a ranked set of global hypotheses, is generated.

Based on the global hypotheses, the target-trees are pruned, for example via:

N-scan pruning, given the most likely global hypothesis. Remove all track-hypotheses that do not have the same observation-to-track assignment N scans back in time as the track-hypothesis that is part of the most likely global hypothesis.

Probability-based pruning. A ranked set of global hypotheses is used to estimate the probabilities of the global hypotheses. The marginal probabilities of the track-hypotheses are then calculated for each target-tree. Track-hypotheses with marginal probability below a threshold are removed.

Track filters for the remaining track-hypotheses are updated with the new detections. Note that there are many variations of how these steps are implemented, ordered, and combined.

The probability-based pruning is here assumed superior to N-scan pruning in scenarios with clutter or interacting targets. The pruning method is more selective, leading to a smaller number of parallel track-hypotheses that need to be predicted and updated. On the other hand, the calculation of probabilities results in a computational overhead.

The probabilities of the global hypotheses are calculated with Bayes' rule, which relies on the hypothesis likelihoods. Denote the likelihood of track-hypothesis h_ij by p_{h_ij}. A standard expression for p_{h_ij} is assumed [1, Chapter 6.2], and the details are excluded here. Typically, the likelihood is built up multiplicatively over a sequence of observation batches with components based on, e.g., the likelihoods of measurement residuals and the probability of target detection. Important for the proposed method is that the likelihood of a global hypothesis can be written as a product of track-hypothesis likelihoods,

p_{H_k} = \frac{1}{c} \, p_{h_{1j_{1k}}} p_{h_{2j_{2k}}} \cdots p_{h_{nj_{nk}}}.   (1)

The constant c is a common factor of all global hypotheses and has no effect when applying Bayes' rule (c represents, for example, the likelihood of all measurements being false alarms, and dividing by this factor allows the multiplicative form of (1) given the standard formulation). Assume that the a priori probability of hypothesis H_k is also expressed per track-hypothesis and included multiplicatively in the likelihood expression (1). Denote the set of all feasible global hypotheses as Ω_f. Applying Bayes' rule now gives

P(H_k) = \frac{p_{H_k}}{\sum_{H_l \in \Omega_f} p_{H_l}}, \quad H_k \in \Omega_f.   (2)

Infeasible hypotheses have probability zero. The marginal probability of track-hypothesis j in tree i is then

P(h_{ij}) = \sum_{\{H_k : h_{ij} \in H_k\}} P(H_k).   (3)
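For concreteness, the following is a minimal, exhaustive sketch of Eqs. (2) and (3), written against explicitly enumerated feasible hypotheses; it is not the product-set method itself, and the array layout and class name are illustrative assumptions.

```java
// Brute-force evaluation of Eqs. (2) and (3), for illustration only.
// likelihood[i][j] is assumed to hold p_{h_ij}; feasible[k][i] is assumed to hold the
// index j of the track-hypothesis chosen from tree i under global hypothesis H_k.
final class ExhaustiveProbabilityEvaluation {
    static double[][] marginals(double[][] likelihood, int[][] feasible) {
        int trees = likelihood.length;
        double[] pH = new double[feasible.length];
        double normalizer = 0.0;
        for (int k = 0; k < feasible.length; k++) {
            double p = 1.0;                                   // Eq. (1), with c dropped
            for (int i = 0; i < trees; i++) p *= likelihood[i][feasible[k][i]];
            pH[k] = p;
            normalizer += p;
        }
        double[][] marginal = new double[trees][];
        for (int i = 0; i < trees; i++) marginal[i] = new double[likelihood[i].length];
        for (int k = 0; k < feasible.length; k++) {
            double posterior = pH[k] / normalizer;            // Eq. (2)
            for (int i = 0; i < trees; i++)
                marginal[i][feasible[k][i]] += posterior;     // Eq. (3)
        }
        return marginal;
    }
}
```

The cost of this enumeration grows with the number of feasible global hypotheses, which is precisely what the product-set representation introduced next is meant to avoid.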
3 Product-set representation of global hypotheses

In the formulation of MHT above, it is suitable to represent the space of all global hypotheses as a product space. Each dimension of the space corresponds to one target-tree. In the example with three target-trees, each consisting of three track-hypotheses, the space of all global hypotheses is

\Omega = \{h_{11}, h_{12}, h_{13}\} \times \{h_{21}, h_{22}, h_{23}\} \times \{h_{31}, h_{32}, h_{33}\}.

Assume the incompatibilities between track-hypotheses presented in the example in Figures 1 and 2. The incompatibilities restrict the set of feasible global hypotheses according to Figure 3. Generally, the feasible set can be intricate and of both small and large size. In some situations, the probability mass is spread out over a large set of global hypotheses, and the mass is difficult to represent with a small set of components. However, instead of using single hypotheses as building blocks, the idea is to use exclusive product sets.

There are many ways of dividing the feasible set into exclusive product sets. The method suggested herein is based on the compatibility matrix. A description is given below.

Figure 3: The space of global hypotheses from the example in Figures 1 and 2. The bars denote the incompatible hypotheses. The set of feasible hypotheses is the part of the space not including any bar.

3.1 Generation of product sets from the compatibility matrix

The assignment conflicts between track-hypotheses can be represented in a matrix form denoted the compatibility matrix. The example in Figure 2 results in the following compatibility matrix:

        h_11  h_12  h_13  h_21  h_22  h_23  h_31  h_32  h_33
h_11     .     1     1     .     .     .     .     .     .
h_12     1     .     1     1     .     .     .     .     .
h_13     1     1     .     1     1     .     .     .     .
h_21     .     1     1     .     1     1     .     .     .
h_22     .     .     1     1     .     1     .     .     .
h_23     .     .     .     1     1     .     1     .     .
h_31     .     .     .     .     .     1     .     1     1
h_32     .     .     .     .     .     .     1     .     1
h_33     .     .     .     .     .     .     1     1     .

A one at a position marks that two hypotheses are incompatible (a dot marks a compatible pair). All hypotheses within a target-tree are incompatible by default. The method for the generation of product sets uses the fact that the incompatibilities in one row of the matrix result in a product set of global hypotheses. For instance, the first three rows give the following three product sets:

S_{11} = \{h_{11}\} \times \{h_{21}, h_{22}, h_{23}\} \times \{h_{31}, h_{32}, h_{33}\}
S_{12} = \{h_{12}\} \times \{h_{22}, h_{23}\} \times \{h_{31}, h_{32}, h_{33}\}
S_{13} = \{h_{13}\} \times \{h_{23}\} \times \{h_{31}, h_{32}, h_{33}\}.

A set S_ij represents a slice of the global hypothesis space, including the compatibility constraints from the corresponding row in the matrix. The slices of a particular target-tree i are exclusive. It is therefore possible to represent the union S_i = ∪_j S_ij (e.g., S_1 = S_11 ∪ S_12 ∪ S_13) efficiently with a list of the exclusive product sets. Slices S_ij without any compatibility constraints are merged to form one set per target-tree i. The feasible set Ω_f can now be expressed as the sequence of intersections

\Omega_f = \bigcap_i S_i = \bigcap_i \Big( \bigcup_j S_{ij} \Big) = \Big( \big( \bigcup_j S_{1j} \cap \bigcup_j S_{2j} \big) \cap \bigcup_j S_{3j} \Big) \cap \ldots   (4)

The intersection of two product sets is simply a new product set. In the intersection of two unions S_1 and S_2, each product set in S_1 must be intersected with each product set in S_2. Thus, with the repeated intersections of the unions in (4), the number of product sets multiplies, potentially leading to an explosion. To overcome this explosion, only the n product sets with the highest summed likelihood are carried along to the next intersection. An efficient q-select algorithm applies (q-select is a modification of q-sort that avoids the sorting). The summed likelihood is easily calculated for product sets. For example, the summed likelihood of S_12 is

p_{S_{12}} = p_{h_{12}} (p_{h_{22}} + p_{h_{23}}) (p_{h_{31}} + p_{h_{32}} + p_{h_{33}}).   (5)

The sizes of the product sets are reduced with each dimension-wise slicing. For representation performance and speed, the product sets should desirably be as large as possible. Thus, there are motivations to study other methods. The recursive formulation presented in Section 4 tends to produce larger product sets.

After the final intersection, the marginal track-hypothesis probabilities are calculated. The procedure in (3) is easily extended to handle the product-set representation. A product set is suitably implemented with an array of unsigned integers. Each element in the array corresponds to one target-tree, and each bit position maps to a track-hypothesis in the target-tree. A one at a position implies that the track-hypothesis is part of the product set. Intersection of two product sets then corresponds to an element-wise AND operation between the two arrays. Using a 64-bit representation, we can handle 64 hypotheses per target-tree, which is deemed sufficient.
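The following is a minimal Java sketch of this bit representation. The class and field names are illustrative assumptions; a 64-bit long per target-tree plays the role of the unsigned integer, and the summed likelihood follows the factorized form of Eq. (5).

```java
// Sketch of a product set as one 64-bit mask per target-tree: bit j of trees[i] is set
// if track-hypothesis h_ij belongs to the set.
final class ProductSet {
    final long[] trees;

    ProductSet(long[] trees) { this.trees = trees.clone(); }

    // The intersection of two product sets is again a product set (element-wise AND).
    ProductSet intersect(ProductSet other) {
        long[] out = new long[trees.length];
        for (int i = 0; i < trees.length; i++) out[i] = trees[i] & other.trees[i];
        return new ProductSet(out);
    }

    // A product set is empty if some tree dimension has no remaining track-hypothesis.
    boolean isEmpty() {
        for (long mask : trees) if (mask == 0L) return true;
        return false;
    }

    // Summed likelihood as in Eq. (5): a product over trees of the summed member
    // likelihoods. likelihood[i][j] is assumed to hold p_{h_ij}.
    double summedLikelihood(double[][] likelihood) {
        double product = 1.0;
        for (int i = 0; i < trees.length; i++) {
            double sum = 0.0;
            for (int j = 0; j < likelihood[i].length && j < 64; j++)
                if ((trees[i] & (1L << j)) != 0L) sum += likelihood[i][j];
            product *= sum;
        }
        return product;
    }
}
```

With this layout, removing or adding a track-hypothesis reduces to clearing or setting bits, which is what makes the recursive operations of Section 4 cheap.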
There is a risk that the select operation at an early stage of the slicing procedure excludes many product sets which at the final stage would be among the n best sets. From empirical observation, we have seen that this risk typically is moderate. In situations with high risk, there is the choice to switch to optimization-based hypothesis generation. We expect that the product-set and the optimization-based methods have different preferred operational regions, and can be designed to cooperate in scenarios with dynamically varying conditions. The product-set method should work better in stretched target clusters with a cluttered background, leading to relatively few incompatibilities between the target-trees. On the other hand, the optimization-based method should quickly find scattered islands of feasible global hypotheses in a space with many incompatibilities. The latter situation may occur for densely spaced target clusters.

4 Recursive formulation

For computational tractability, a recursive version of the product-set method is developed. Divide the observations into two groups: those belonging to past observation batches and those belonging to the present observation batch. The incompatibilities of track-hypotheses are due to either the past or the present observations. Past observations produce one compatibility matrix and a prior set of feasible hypotheses, while the present observations produce a second matrix and a second feasible set. Denote the prior feasible set at time t as Ω_f(t|t-1), and the present feasible set as Ω_f,p(t). The desired, combined feasible set is

\Omega_f(t|t) = \Omega_f(t|t-1) \cap \Omega_{f,p}(t),   (6)

where the intersection is carried out as in (4). The set Ω_f(t|t-1) is generated from Ω_f(t-1|t-1) by taking into account the following changes in the definition of the global hypothesis space:

A track-hypothesis is pruned. The track-hypothesis is removed from all product sets in Ω_f(t-1|t-1) containing the track-hypothesis.

A track-hypothesis spawns new track-hypotheses in the correlation phase. All product sets containing the prior track-hypothesis are expanded to include the spawned track-hypotheses.

A track-tree is deleted. The dimension corresponding to the track-tree can simply be removed if there is one dominant hypothesis in the track-tree (e.g., a coasting hypothesis with no assigned detections in the last N scans, or a null-hypothesis at initiation). Otherwise, the list of product sets must be traversed to assure exclusiveness of the product sets.

A track-tree is added. The compatibilities of the new track-tree and the prior hypothesis space are expressed with a list of product sets, and an intersection with Ω_f(t-1|t-1) is carried out.

Two product spaces are merged. The situation occurs when two previously independent assignment clusters are merged due to new assignment constraints. First, the two prior hypothesis sets are expanded into the merged space. Second, the compatibilities between the two prior hypothesis spaces are expressed with a new list of product sets defined in the merged space. Third, the new list is intersected with one of the prior sets, and the result of that operation is then intersected with the second prior set.
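The central operation in Eqs. (4) and (6) is intersecting two lists of exclusive product sets while keeping only the n most probable results. A minimal sketch of one such step, building on the ProductSet sketch above; the class name is an illustrative assumption, and a full sort stands in for the q-select mentioned in Section 3.1.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// One intersection step of Eq. (4) / Eq. (6): intersect every product set in one list
// with every product set in the other, drop empty results, and keep the n product sets
// with the largest summed likelihood.
final class ProductSetIntersection {
    static List<ProductSet> intersectAndSelect(List<ProductSet> a, List<ProductSet> b,
                                               double[][] likelihood, int n) {
        List<ProductSet> result = new ArrayList<>();
        for (ProductSet sa : a) {
            for (ProductSet sb : b) {
                ProductSet s = sa.intersect(sb);
                if (!s.isEmpty()) result.add(s);
            }
        }
        // Rank by summed probability mass and keep the n best sets.
        result.sort(Comparator.comparingDouble(
                (ProductSet s) -> s.summedLikelihood(likelihood)).reversed());
        return result.size() > n ? new ArrayList<>(result.subList(0, n)) : result;
    }
}
```

The full sort costs O(m log m) in the number of candidate sets; the q-select step described in Section 3.1 avoids the sorting while producing the same n-best selection.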

The most common operations are pruning and spawning. With a bit representation of the product sets, these operations are fast: any bit maps to either none or a set of other bits.

The set Ω_f,p(t) can be formed as in Section 3.1, based on the conflict matrix given the present batch of observations. However, the problem is also favorably collapsed and posed as a 2D ranked assignment problem. That is, every track-hypothesis in a target-tree which is correlated to an observation in the present batch is represented by one collapsed track-hypothesis. There will be a maximum of one collapsed track-hypothesis per observation and track-tree. A ranked assignment problem is then solved between the observations and the track-trees. There are alternatives to how the assignments are scored:

Use the summed track-hypothesis likelihood for the collapsed hypotheses. This is not a good choice, since the incompatibilities from prior time instants are not included at all in the ranking.

Use the scoring method of Cheap JPDA, see [4].

Consider one target-tree at a time, and marginalize the effects of the compatibility constraints from the prior time instants. The result already exists in P(h_ij) from time instant t-1. The likelihood of a measurement z being generated by the target associated with target-tree i is then

p(z|i) = \sum_j p(z|h_{ij}) P(h_{ij}).   (7)

We have only used the last scoring method. As a ranked assignment procedure for the collapsed problem, we suggest either using the method in Section 3.1 or an optimization-based procedure.
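For concreteness, a minimal sketch of the Eq. (7) scoring; the class and array names are illustrative assumptions.

```java
// Eq. (7): score a collapsed track-hypothesis of tree i by marginalizing over its prior
// track-hypotheses. measurementLikelihood[j] is assumed to hold p(z | h_ij) and
// priorMarginal[j] the marginal P(h_ij) carried over from time t-1.
final class CollapsedScore {
    static double score(double[] measurementLikelihood, double[] priorMarginal) {
        double pz = 0.0;
        for (int j = 0; j < measurementLikelihood.length; j++)
            pz += measurementLikelihood[j] * priorMarginal[j];
        return pz; // p(z | i)
    }
}
```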
In any case, the result is expanded back to the original definition of the hypothesis space by reversing the collapse, and Ω_f,p(t) is produced.

The recursive framework has been implemented as the core of a bi-level MHT algorithm and tested in ground scenarios with clutter. It has proven to be fast and reliable in the tested application, though in these scenarios the number of track-hypotheses per track was kept low ( 2) due to the computational demand of filter predictions and updates.

5 Numerical comparison to Murty's MHT method

The properties of the product-set representation and algorithms are of interest. As an MHT method, the performance is similar to other MHT methods, and we will not present any results from such evaluations since they are as expected: multiple-target scenarios and higher clutter levels can be handled to a much better degree than with single-hypothesis methods. More interesting are the relation in computational demand to other methods and the method's key ability to represent probability mass in the global hypothesis space.

For these matters, a scenario based on seven parallel targets on the ground was chosen. The scenario is not related to any real scenario and was designed for demonstration purposes. In particular, the scenario results in a diffuse probability distribution over global hypotheses. The target trajectories are shown in Figure 4. An airborne radar is located 4 km west of the targets. The update period is 1 seconds and the measurement standard deviation is 1 m in range and 4 mrad in azimuth. A background clutter level β_fa of .5, .5, or 5 false alarms per square kilometer is assumed. The targets move with 2 m/s in the x-direction. All targets are assumed to be tracked at the start of a simulation.

Figure 4: A scenario with seven parallel targets in clutter for the evaluations.

The evaluations are made for the recursive version of the product-set based method presented in Section 4, denoted A1. As a ranked assignment procedure to generate Ω_f,p(t), the slicing method in Section 3 is used. The maximum numbers of product sets in both Ω_f(t|t) and Ω_f,p(t) are set to 5. All track-hypotheses with probability less than .2 are pruned (pruning done after probability evaluation). The maximum number of track-hypotheses per tree is set to 4 (pruning done per target-tree before probability evaluation).

As a reference, Murty's ranked assignment algorithm applied to MHT has been implemented (following [2] and [1]). The method is denoted A2. After

the reception and correlation of each batch of observations, the n best global hypotheses are generated from the prior n best global hypotheses. The algorithm initially solves the optimal assignment problem for each prior global hypothesis. The problem-solution pairs are sorted according to their score, and Murty's algorithm then traverses the resulting list to find the second, third, etc. solution by modifying the problems so that already found solutions are excluded. The modified problems and their solutions are re-inserted into the list. A standard auction algorithm forms the base, with the extension that a solution (including prices) is remembered for each solved problem. When the problems are modified, the new solutions are found quickly. No mechanism is added to avoid price wars [5]. The algorithm can be sped up by modifications according to [6]. However, the initial n solutions of full-size assignment problems are still required. In the evaluations, the number of global hypotheses in A2 is set to 5, and the same pruning strategy as in the product-set method is applied. The code for probability evaluation in both A1 and A2 is written in Java and run from MATLAB.

Representation of probability mass: The algorithms' abilities to represent the probability mass in the feasible set Ω_f are of interest. Specifically, we should compare the summed mass in Ω_f(t|t) of A1 with the summed mass of the global hypotheses in A2. Unfortunately, as a consequence of deviating pruning decisions, A1 and A2 will have different space definitions. A direct comparison is thus inappropriate. In this particular scenario, however, the product-set representation of A1 works fine and gives approximately the complete set Ω_f. The product sets can thus be approximated as a superset of the 5 best global hypotheses for all time instants. Therefore, the summed mass of the 5 best global hypotheses, extracted from the product representation, reflects the representation abilities of A2. Figure 5 shows this mass normalized by the summed mass of Ω_f(t|t) for the clutter densities β_fa = {.5, .5, 5} fa/km^2, as a function of time. The curves are based on 5 Monte Carlo simulations. When β_fa = .5 fa/km^2, both algorithms are able to fairly represent the mass; there is little difference, since the ratio is one. When β_fa = 5 fa/km^2, the probability mass is diffuse in the space, and the representational properties of A2 are limited. As a consequence, the ability to sustain multiple hypotheses for all targets is reduced. Although the example is constructed, these effects are present in other situations with clutter and stretched target clusters.

Figure 5: Represented part of the probability in the space of global hypotheses by the top 5 hypotheses, given the scenario of Figure 4.

Execution time: The time to evaluate the probabilities is measured for both methods at each time step (the time to compute predictions and updates of track-hypotheses is excluded). In Figure 6, the execution times per iteration are shown for both A1 and A2, given β_fa = {.5, .5, 5} fa/km^2. With the settings herein, A1 is faster in all cases. Note that A1 represents the probability mass better, resulting in a larger hypothesis space to operate in. If the representation abilities were calibrated, i.e., by lowering the number of product sets in A1, the difference in execution time would increase.
In Figure 7, the average execution time per iteration of A1 is shown as a function of the maximum number of product sets. The scenario is the same as above with β_fa = 5 fa/km^2. The average is over the last 1 seconds of the scenario, including 1 Monte Carlo rounds per sample. A linear characteristic is dominant. There are quadratic terms present, such as the intersection of lists of product sets, so we cannot expect a linear characteristic in general for other scenarios. Figure 8 shows the average execution time of A1 per iteration as a function of the number of targets. The same conditions as in the generation of Figure 7 apply, with the difference that the maximum number of product sets is 5 in all cases. Again, a linear characteristic is dominant. This property is understandable, since adding one new target implies one new intersection of a list of product sets.

More testing is required to examine the differences between A1 and A2. For instance, the sensitivity to the following properties is of interest:

Density of targets in a cluster. A high density inflicts many incompatibilities, and A2 should work better in comparison.

Clutter density, affecting the growth factor of the target-trees and the diffuseness of the probability mass.

Pruning strategy and the maximum allowed number of track-hypotheses.

We expect that A2 extended with ideas from [6] is faster than A1 in some parts of the parameter space, for instance for closely spaced targets with little clutter. However, there is potential to improve A1 as well, e.g., using interleaved sort operations and upper bounds on the probability mass in product sets, or generating Ω_f,p(t) according to [6]. Further comparisons are left for future work.

Figure 6: Execution times of A1 and A2 in the scenario in Figure 4. Only the evaluation of probabilities is included, not the prediction and updating of track-hypotheses.

Figure 7: Average execution time of A1 as a function of the maximum number of product sets. The average is over the last 1 seconds of the scenario, and 1 Monte Carlo simulations. The scenario is the same as above with β_fa = 5 fa/km^2.

Figure 8: Average execution time of A1 as a function of the number of targets, N. The targets in Figure 4 are enumerated from top to bottom, and N < 7 corresponds to the N top targets. The average is over the last 1 seconds of the scenario, and 1 Monte Carlo simulations. Further, β_fa = 5 fa/km^2, and the maximum number of product sets is 5.

6 Conclusions

An MHT method is introduced in which global hypotheses are represented with a list of exclusive product sets. The aim of the method is to compute the probabilities of track-hypotheses for pruning purposes. The product-set representation is motivated by the desire to improve the support of a diffuse probability mass in the space of global hypotheses, and to reduce the computational demand. A low-level style of programming based on bit operations is applicable, and a recursive formulation exists, resulting in a computationally tractable method. The method has successfully been implemented in an MHT framework with seemingly good performance.

The product-set method was compared numerically to an MHT method based on Murty's algorithm. It was demonstrated that a conventional MHT method potentially lacks the ability to completely represent the probability mass in a situation with multiple targets in clutter (though a poor performance is not necessarily implied). Conditioned on the scenario, the product-set method showed satisfying performance in this respect. The product-set method is also faster. In the future, the product-set method should be subject to further development, testing, and comparisons. In particular, an investigation of computational complexity is needed.

References

[1] S.S. Blackman. Design and Analysis of Modern Tracking Systems. Artech House, 1999.

[2] I.J. Cox and M.L. Miller. On finding ranked assignments with applications to multitarget tracking and motion correspondence. IEEE Transactions on Aerospace and Electronic Systems, 31(1), 1995.

[3] T. Kurien. Issues in the design of practical multitarget tracking algorithms. In Multitarget-Multisensor Tracking: Advanced Applications, chapter 3. Artech House, 1990.

[4] H. Quevedo, S.S. Blackman, T. Nichols, R. Dempster, and R. Wenski. Reducing MHT computational requirements through use of Cheap JPDA methods. Signal and Data Processing of Small Targets, SPIE Proceedings, 4473, 2001.

[5] D. Castañón. New assignment algorithms for data association. Signal and Data Processing of Small Targets, SPIE Proceedings, 1698, 1992.

[6] M.L. Miller, H.S. Stone, and I.J. Cox. Optimizing Murty's ranked assignment method. IEEE Transactions on Aerospace and Electronic Systems, 33(3), 1997.