Bayesian Motion Models of LPB

Lin & Goodrich at Brigham Young are working on Bayesian motion models for generating probability maps. They have an interesting model, but need GPS tracks to train it. It's a nice complement to our approach, and it will be interesting to see how they compare.

Originally a very cool review published in the first half of 2010. The review led to phone calls and a very productive collaboration on MapScore and other work.

Partly reconstructed March 2012.

The citation is:

Lin, L. and Goodrich, M. A. (2009). "A Bayesian approach to modeling lost person behaviors based on terrain features in Wilderness Search and Rescue." In Proceedings of the 18th Conference on Behavior Representation in Modeling and Simulation, Sundance, UT, 31 March – 2 April 2009, pp. 49-56.

This was later expanded to:

Lin, L. and Goodrich, M. A. (2010). "A Bayesian approach to modeling lost person behaviors based on terrain features in Wilderness Search and Rescue." Computational and Mathematical Organization Theory 16(3), pp. 300-323.

Overview

Lin & Goodrich at Utah are also working on Bayesian probability maps of lost person behavior. They use a moving-person model, estimating the tendency for a person to go uphill vs downhill vs level, or to transition from one terrain/vegetation to another. This is most closely related to 1988 Java work by Castle [at archive.org | at NPS] and circa 2000 ArcGIS work by Hugh Round (as an undergrad project). A lost person is represented as a set of transition matrices that give their propensity to make each kind of transition. We'll call this artificial lost person a sim.

To generate the probability map, thousands of sims are dropped at or near the initial planning point (IPP) and allowed to wander across the map according to their propensities. Their final locations form the probability map.
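
To make the sampling step concrete, here is a minimal sketch (not their code): a square grid instead of hexagons, and made-up propensities standing in for trained ones.

    import numpy as np

    rng = np.random.default_rng(0)

    GRID = 50          # 50x50 cells (their model uses hexagons)
    N_SIMS = 10_000    # number of simulated lost people
    N_STEPS = 200      # steps each sim wanders
    IPP = (25, 25)     # initial planning point (cell indices)

    # Made-up propensities: stay, north, south, east, west. A trained
    # model would derive these from the terrain features of each cell.
    propensities = np.array([0.2, 0.25, 0.15, 0.25, 0.15])
    moves = [(0, 0), (-1, 0), (1, 0), (0, 1), (0, -1)]

    counts = np.zeros((GRID, GRID))
    for _ in range(N_SIMS):
        r, c = IPP
        for _ in range(N_STEPS):
            dr, dc = moves[rng.choice(5, p=propensities)]
            r = min(max(r + dr, 0), GRID - 1)  # clamp at map edges
            c = min(max(c + dc, 0), GRID - 1)
        counts[r, c] += 1  # record this sim's final location

    prob_map = counts / counts.sum()  # normalized probability map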

What makes their model Bayesian is that they provide broad "expert" priors for the propensities, and then refine these given actual GPS tracks. The priors are Beta distributions, a common choice when the parameter is a probability. The data consists of all the actual transitions seen on all the GPS tracks in the case history.

So their model has three steps:

  • Create prior transition matrices
  • Observe real tracks
  • Calculate posterior transition matrices

Then you can use these trained matrices to generate your probability maps.
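
For a single propensity the posterior update is conjugate and takes one line. A toy sketch of the three steps, with invented counts, treating "stay in dense vegetation vs. leave it" as a binary propensity:

    # Step 1: broad "expert" prior on one propensity, e.g. the
    # probability of staying in dense vegetation on a given step.
    alpha0, beta0 = 2.0, 2.0        # hypothetical prior pseudo-counts

    # Step 2: transitions observed on real GPS tracks (invented):
    # of 40 steps starting in dense vegetation, 25 stayed in it.
    stays, leaves = 25, 15

    # Step 3: conjugate Beta-Binomial update.
    alpha1, beta1 = alpha0 + stays, beta0 + leaves
    posterior_mean = alpha1 / (alpha1 + beta1)  # 27/44, about 0.61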

Static vs Motion Models

So far, the SARBayes approach here has been to work directly with the IPP and the find location, disregarding motion. We estimate probabilities based solely on how well the features of a cell match the distribution of find locations in the data (a toy scoring sketch follows this list). This has some possible advantages:

  • Lost person data gives the IPP and the find location, but not the path. Bob Koester says it's usually not feasible to infer the path from those two points, and has examples to show this.
  • Subjects don't move much, so the find-location distribution is probably a good estimate of where they are by the time a formal search is underway.
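
As a toy illustration of the static scoring mentioned above (invented features and find data, not the production code):

    from collections import Counter

    # Invented find-location features from case data: (vegetation, topology).
    finds = [("dense", "hill"), ("medium", "plain"), ("dense", "hill"),
             ("sparse", "plain"), ("dense", "hill"), ("medium", "hill")]
    find_dist = Counter(finds)
    total = sum(find_dist.values())

    def cell_score(veg, topo):
        """Score a cell by how often finds share its features."""
        return find_dist[(veg, topo)] / total

    # Normalizing cell_score over all cells in the search area gives
    # the probability map; in practice we also condition on distance
    # from the IPP.
    print(cell_score("dense", "hill"))  # 0.5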

But a motion model has some advantages:

  • Clearly it would be a better model when the subject is still moving: either early in the search, or throughout for some subjects.
  • It might be possible to get or reconstruct GPS tracks from enough lost people: either subjects who had a GPS signal but no iPhone to show them their location and an exit path, or those with clues strong enough to trust the actual track.
  • Perhaps most importantly, a good motion model should be very responsive to actual terrain, because it considers paths, not just point attractiveness.

The right motion model would be preferable in every way, but for now I'm betting on the static model. Why?

First, I'm relying heavily on the assumption that by the time we're searching, most subjects aren't moving much. Otherwise optimal allocation needs to account for subject motion, and that requires a motion model. If the assumption is wrong, the only thing saving me is that PODs (probabilities of detection) are so low that we'll revisit a cell anyway. Second, a motion model has to get more things right, including many non-Markov properties like:

  • Tendencies to stop moving after a while, or at night.
  • Using strategies like route-sampling, backtracking, heading for peaks, etc.
  • Effects that depend on how long you've been going in one direction (see the state-augmentation sketch below).
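
A standard fix for such non-Markov effects is to fold the relevant history into the state itself, at the cost of a much larger state space. A hypothetical augmented state, just to illustrate:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SimState:
        cell: int               # current hex index
        heading: int            # last move direction, 0-5 on a hex grid
        steps_on_heading: int   # how long we've kept one direction
        minutes_elapsed: int    # lets stopping depend on time/fatigue

    # Transition propensities conditioned on SimState rather than on
    # the cell alone can express "stop at night" or "keep going
    # straight", but the matrices grow with every variable added.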

That said, a simple motion model might surprise us and do well enough, so I'm glad they're pursuing it.

Technical Points

They discretize the region into N hexagons. Each hexagon has three features:

  • Topology: [lake, plain, hill]
  • Vegetation: [sparse, medium, dense]
  • Elevation

Transition matrices are defined for each feature separately and assumed independent. These are then combined into a single NxN transition matrix giving the propensity to move from each cell to each other cell.
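
To illustrate the independence assumption: the cell-to-cell propensity is (up to normalization) the product of the per-feature propensities. A toy version with invented numbers, a four-cell map, and elevation omitted for brevity:

    import numpy as np

    # Hypothetical per-feature propensities (rows sum to 1).
    # Topology rows/cols: [lake, plain, hill].
    T_topo = np.array([[0.80, 0.15, 0.05],
                       [0.10, 0.70, 0.20],
                       [0.05, 0.35, 0.60]])
    # Vegetation rows/cols: [sparse, medium, dense].
    T_veg = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.6, 0.2],
                      [0.1, 0.3, 0.6]])

    # A toy map of N = 4 cells with their feature labels.
    topo = [1, 1, 2, 2]   # plain, plain, hill, hill
    veg = [0, 1, 1, 2]    # sparse, medium, medium, dense
    neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

    # Cell-to-cell propensity = product of per-feature propensities,
    # normalized over each cell's neighbors plus itself.
    N = 4
    P = np.zeros((N, N))
    for i in range(N):
        for j in neighbors[i] + [i]:
            P[i, j] = T_topo[topo[i], topo[j]] * T_veg[veg[i], veg[j]]
        P[i] /= P[i].sum()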

Questions:

1. Why NxN? All we need is Nx7, since each hex has six neighbors plus itself. A trivial point, perhaps.

2. Data: They only have 1 or 2 tracks so far. But the paper establishes the method.

3. Their parameter correlations mix subject tendencies with environmental descriptions. For example, their negative correlation between V22 (dense to dense) and T11 (plain to plain) happens because for their chosen location,

"dense vegetations are mostly located on topology type of hill and medium vegetation are mostly located on topology type of plain."

So we should factor the model to separate terrain correlations from subject tendencies.

4. Bayesian χ²? Given a Bayesian approach, it's a little odd to do null-hypothesis testing. We could reformulate this to get likelihood ratios on alternate hypotheses, or a distribution over parameters.
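
For a single propensity, one such reformulation is a Bayes factor between two Beta priors. A sketch with invented counts; the binomial coefficient cancels in the ratio, so it's omitted:

    from math import exp, lgamma

    def log_beta(a, b):
        """Log of the Beta function."""
        return lgamma(a) + lgamma(b) - lgamma(a + b)

    def log_marginal(k, n, a, b):
        """Log marginal likelihood (up to the binomial coefficient)
        of k stays in n steps under a Beta(a, b) prior."""
        return log_beta(a + k, b + n - k) - log_beta(a, b)

    # Invented data: 25 of 40 steps stayed in dense vegetation.
    k, n = 25, 40

    # H0: propensity near 0.5 (tight prior); H1: anything (flat prior).
    bf = exp(log_marginal(k, n, 50, 50) - log_marginal(k, n, 1, 1))
    print(bf)  # > 1 favors the near-0.5 hypothesis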

 

Author: ctwardy

Charles Twardy started the SARBayes project at Monash University in 2000. Work at Monash included SORAL, the Australian Lost Person Behavior Study, AGM-SAR, and Probability Mapper. At George Mason University, he added the MapScore project and related work. More generally, he works on evidence and inference with a special interest in causal models, Bayesian networks, and Bayesian search theory, especially the analysis and prediction of lost person behavior. From 2011-2015, Charles led the DAGGRE & SciCast combinatorial prediction market projects at George Mason University, and has recently joined NTVI Federal as a data scientist supporting the Defense Suicide Prevention Office. Charles received a Dual Ph.D. in History & Philosophy of Science and Cognitive Science from Indiana University, followed by a postdoc in machine learning at Monash.