Bluefin-21 Analysis

One of our SciCast forecasters posted an excellent analysis of how he estimated the (remaining) chance of success for Bluefin-21 finding MH370 by the end of the question.

Forecast trend for Bluefin-21 success, on SciCast.

Jkominek was wondering why the probability kept jumping up, and created a Bayes Net to argue that there was no good estimation reason for it.  (There may be good market reasons -- cashing in to use your points elsewhere.)
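
For intuition (my own sketch, not jkominek's network): the standard Bayesian search update says an unsuccessful search should push the estimate down, not up. With prior p that the wreck is in the searched box and cumulative detection probability d over the portion swept so far:

```python
def posterior_in_area(p, d):
    """Posterior probability the target is in the searched box,
    given prior p and an unsuccessful search with cumulative POD d."""
    return p * (1 - d) / (1 - p * d)

# Hypothetical numbers: 60% prior that the wreck is in Bluefin's box,
# and sonar coverage so far amounting to a 50% chance of having
# detected it if it were there.
belief = posterior_in_area(0.6, 0.5)   # lower than the 0.6 prior
```

Each fruitless pass raises d and lowers the posterior, so absent new outside evidence the forecast should decline toward the end of the question.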

Bayes net model created by jkominek to explore the Bluefin question.


The full blog post is here:


Posted in Search Theory | Leave a comment

Incoherence & Mattson

The following figure is from a recent paper I co-authored*:

Figure from Karvetski et al. 2014 showing we get more accuracy by ignoring incoherent estimates than by simple unweighted averages.  (The unfortunately abbreviated 'BS' means 'Brier Score'. Lower is better, with 0 being perfect.)

What implications does it have for making subjective "consensus" probability maps at the start of a search?
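
For reference, the Brier score in the figure is just the mean squared error of probability forecasts against 0/1 outcomes (the numbers below are made up, not from the paper):

```python
def brier(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    0 is perfect; a constant 0.5 forecast scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical questions, with the first and third resolving True.
score = brier([0.9, 0.2, 0.7], [1, 0, 1])
```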

Continue reading

Posted in Search Theory | Tagged , , , , , | 5 Comments

UAVs, MH370, Prediction Markets

In which I discuss two-and-a-half approaches to crowdsourcing Search & Rescue, and invite you to try one -- namely, mine.  (SciCast)

Continue reading

Posted in Search Theory | Tagged , , , , , , , , , | 3 Comments

NBC29 WVIR Charlottesville, VA News, Sports and Weather

Short interview showcasing GIS for search and rescue.  The reporter only identifies the system as "ArcGIS" but I believe it is Don Ferguson's IGT4SAR.


Posted on by ctwardy | Comments Off

One of the giants in Bayesian statistics passed away at his home earlier this week, coincidentally at the start of the O'Bayes 250 conference marking the 250th anniversary of the publication of Bayes' paper. Writeup in X'ian's 'Og.

Posted on by ctwardy | Comments Off

Berkshires 2006 Sweep Width Experiment

In the summer of 2006, Rick Toman (Massachusetts State Police) and Dan O'Connor (NewSAR) organized a sweep width experiment and summit called "Detection in the Berkshires" at Mount Greylock in Massachusetts.  In addition to the sweep-width experiment, Perkins & Roberts provided search tactics training for several teams, and the summit provided a chance for us to explore similarities and differences between formal search theory and formalized search tactics.  It was an important chance to meet many key people, compare notes, and discuss ideas.  I wish I had been more diligent about following up. Many thanks to Rick & Dan for organizing the event, and to many others listed at the end. However, this post is mostly to provide a reference for the sweep width experiment.

The data from that sweep width experiment has been used in previous blog posts and in a new article in Wilderness & Environmental Medicine, but hasn't been published independently.

Detection in the Berkshires. Back row: Andy Petrie, Bob Rando, George Rice, Ken Hill, Pete Roberts, Charles Twardy, Dave Perkins, Rick Toman Seated: Dan O’Connor, Joe Hess. Not shown: Jack Frost, Art Allen



Thirty-four searchers from several Massachusetts SAR groups ran the course, which had 15-17 of each type of search object, yielding between 470 and 580 detection opportunities per object type. The main results are summarized in the following table:

Adult Hi-Vis 39m 40m
Clue Hi-Vis 16m 22m
Clue Lo-Vis 9m 11m

It's also interesting to see the raw POD data. Note that the experimental design spaces objects according to their AMDR, so as to keep PODs in the middle ranges. We need both detections and misses to measure detectability!  Averaging over the 34 searchers, we find:

  • Adult Hi-Vis: 44% (24%-59%)
  • Clue Hi-Vis: 47% (21%-64%)
  • Clue Lo-Vis: 37% (13%-73%)

Almost all objects could be detected if you knew where to look, but by design some were placed near the limits of detectability. The conditions were otherwise ideal.  Searchers who did not know where to look found just under half the high-visibility targets, and about one-third of the low-visibility ones.
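
For readers new to sweep width: ESW is the area under the lateral range curve, i.e. POD as a function of distance off the searcher's track, integrated over both sides of the track. A minimal sketch with hypothetical binned data (not the Berkshires numbers):

```python
# Hypothetical lateral range data, binned by distance from the track.
distances = [0, 10, 20, 30, 40, 50]           # meters off the track
pods      = [0.9, 0.8, 0.55, 0.3, 0.1, 0.0]   # fraction detected per band

def esw(xs, ps):
    """Effective Sweep Width: trapezoidal integral of POD over
    distance, doubled to cover both sides of the track."""
    area = sum((ps[i] + ps[i + 1]) / 2 * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))
    return 2 * area
```

This is why the design needs both detections and misses: bins pinned at 0% or 100% tell you nothing about where the curve actually falls off.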

Sweep Width Experiment

Charles Twardy, Jack Frost (USCG), and Art Allen (USCG) set up an ESW (Effective Sweep Width) experiment following the procedure in Koester et al. (2004).  We deviated from that procedure in a few ways:

  • We used only one adult target: the high-visibility adult manikin.
  • Our clue targets were not colored gloves but assorted shoes donated by New Balance: brown-or-black shoes for low-visibility clues, and white shoes for high-visibility clues.

More tbd...


Adult hi-vis manikin on the trail.
Hi-vis clue (white sneaker) among vegetation.
Low-vis clue (brown shoe) amid vegetation.


The IDEA spreadsheet with all the data entered so far is mostly in this sheet:


However, stand by for updates: in preparing this post I realized that some AMDR measurements were done on a different version of the spreadsheet.  This section will be revised....


It's worth posting an observation by Ken Hill.  He served as a data logger for one searcher, and decided that while logging he couldn't really do more than look at one side of the course.  The searcher, of course, was looking at both sides.  Ken ended up detecting slightly more objects than the searcher and, being a psychologist, suggested that we are not actually perceiving anything while swiveling our heads, and might do better to search in pairs, with one person looking left and the other looking right.  I don't know if anyone has followed up on this idea, but this kind of assigned "sector" searching is common on aircraft and, I am told, in military ground patrols.  There is plenty of data on saccadic masking and change blindness to suggest we try Ken's idea.


I'm going to crib from Rick Toman's thank-you email to all participants.  It reminds me there were many more people involved in making this happen. To hit just the highlights:

  • Lt. Scibelli and other EPOs from Massachusetts State Police
  • Massachusetts SERT participated in both the ESW and tactical training
  • Civil Air Patrol participated and helped with organization.
  • Central Mass. SAR Team participated and helped
  • Camp Mohawk for the Friday venue and quarters
  • George Rice from NASAR and NJ SAR provided a management perspective
  • Ranger Joe Hess from New York participated, engaged in several discussions, and helped clean up the course.
  • Robb Grace of Berkshire Mountain SAR assisted with data collection
  • DCR Ranger Bob Rando and his staff facilitated the whole event and provided valuable assistance throughout.
  • State Police Tactical Ops provided extra support
  • Andie Petrie, EPO MIS Section Chief, provided maps, data management, and a mobile command post, and a great can-do attitude.
Posted in Search Theory | Tagged , , , , | Leave a comment

MapScore Updates

The MapScore site has been updated! The most exciting new feature is Batch Upload.

  • Batch Upload!  In one fell swoop, upload and re-score all your models and cases (or as many as necessary).  No more clicking around or messing with “active” vs “inactive” cases.  (Unless you want to...)
MapScore batch upload screen. New Nov. 2013.

  • Corollary: you can now redo any case.  New values simply replace old values.  The old system was too rigid.  Modelers are developing as they test, and often discover glitches in the model or data layers.
  • Upload now accepts all RGB and Grayscale PNG images, using the Python Imaging Library to convert them to grayscale before scoring.  (Of course, there is no guarantee that the conversion preserves the rankings of your idiosyncratic full-color heatmap, but that’s not really our problem, is it? :-) )
  • Leaderboard: fixed averages & confidence intervals, improved formatting.
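
The grayscale normalization is simple; roughly this (a sketch, not MapScore's actual code, and `to_grayscale` is my name for it):

```python
from PIL import Image

def to_grayscale(img):
    """Normalize an uploaded RGB or grayscale PIL image to
    single-channel luminance ('L') mode before scoring."""
    return img if img.mode == "L" else img.convert("L")

# A tiny in-memory RGB image standing in for an uploaded heatmap.
rgb = Image.new("RGB", (4, 4), (255, 0, 0))
gray = to_grayscale(rgb)
```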

There are various and sundry small fixes in this and a previous, un-announced update.  For example:

  • Cleaner login screen.
  • Upgraded to a production webserver & virtual machine.
  • Refactored a bunch of back-end code. Of course.

Elena is now running batches of “Updated” versions of the base models (Distance, Watershed, Combined).  We discovered glitches in the way ArcGIS was exporting PNG files. For now we are leaving the old models up for comparison, but will eventually just replace them.



Thanks to:

  • SysAdmin extraordinaire Nick Clark for the core upgrades.
  • Summer high-school intern Hardhik Nadella for streamlined HTML.
  • Elena Sava (now a graduate student!) & Mukul Sonwalkar for improvements to the baseline models.
  • 2013 DHS SBIR award to dbS Productions, LLC to support work on batch upload.
  • 2011 NSF Research Experience for Undergraduates award to Brigham Young University to create MapScore in the first place.
  • IARPA & Mason: Day-to-day funding for MapScore comes from (a small portion of) the overhead funds from my day job, the SciCast Science & Technology forecasting project.
Posted in MapScore: A Portal for Scoring Probability Maps | Tagged , , | Leave a comment

IGT4SAR post

Article by Don Ferguson on the WiSAR and GIS blog. Nice overview of search planning done in GIS.

Stay tuned for some MapScore updates.

Posted in Search Theory | Leave a comment

Search Theory in C4ISR Journal

The C4ISR Journal had a recent search theory article quoting me along with Larry Stone.  I'm quite honored, like the British company in Dirk Gently's Holistic Detective Agency:

...was the only British software company that could be mentioned in the same sentence as ... Microsoft.... The sentence would probably run along the lines of ‘...unlike ... Microsoft...’ but it was a start.

It's a good article, covering the undeniably exciting historical origins in hunting U-boats, and looking at what may be a modern renaissance.  I think the article stretches to connect search theory with Big Data, but the author does note that when the data is visual, and you have humans scanning it for objects, there is a connection.  With planning, it could have been used to prioritize the Amazon Mechanical Turk search for Jim Gray.  (The resolution of the actual images in that search was probably too low regardless, but the core idea was sound.)

Around 2003 I did hear a presentation taking the first steps towards adapting search theory to graph search (i.e. social networks).  The authors proved some interesting theorems, but they admitted it was only a first step.  In my opinion that work was the best that day (including ours), but I don't think the funding agency was interested in theorems. I'm not sure if the work has continued.

NB: The line about my work with JIEDDO is slightly misleading.  It says, "Some of that research included trying to locate insurgents planting IEDs."  Some of JIEDDO's research no doubt did, but we at Mason concentrated more on models to find already implanted IEDs, or likely IED locations.  We used stock search theory to improve other models and simulations.  And while we deeply appreciated funding for Bayesian inference, we don't have much visibility into whether it got used.  (One of our analysts deployed, but he concentrated on the gains to be had by coordination.)

Anyway, happy for the mention, and happier still for possible broader interest in search theory.

PS: On that note, I've just heard that Chris Long is resurrecting the William Syrotuck search theory symposium at or alongside the 2014 NASAR conference!

Posted in Search Theory | Tagged , , , , , , , | 1 Comment

Structured Methods for Intelligence Analysis

My colleagues just published a paper in Euro Journal on Decision Processes, for their special issue on risk management.

Karvetski, C.W., Olson, K.C., Gantz, D.T., Cross, G.A., "Structuring and analyzing competing hypotheses with Bayesian networks for intelligence analysis". EURO Journal on Decision Processes, Special Issue on Risk Management.

Alas, it's behind a paywall and the printed edition isn't due until Autumn. Here's an excerpt from the abstract:

Although ACH aims at reducing confirmation bias, as typically implemented, it can fall short in diagramming the relationships between hypotheses and items of evidence, determining where assumptions fit into the modeling framework, and providing a suitable model for ‘‘what-if’’ sensitivity analysis. This paper describes a facilitated process that uses Bayesian networks to (1) provide a clear probabilistic characterization of the uncertainty associated with competing hypotheses, and (2) prioritize information gathering among the remaining unknowns. We illustrate the process using the 1984 Rajneeshee bioterror attack in The Dalles, Oregon, USA.

I've seen some very good demonstrations of ACH, but when all is said and done, the ACH matrix is a rough approximation to Bayes, justified because it is faster or more intuitive.  But in fact it requires just as many judgments.  Consider this passage from their conclusion:

Although a Bayesian network is a more sophisticated model than ACH, it can be less tedious by eliminating repeated elicitations after partitioning hypotheses into multiple dimensions and focusing on local relationships between variables. With ACH, 121 inputs were needed to define the model in Table 2, whereas 118 conditional probabilities were needed in Tables 3 and 4 to define the Bayesian network.

And when you're done, the Bayes net can perform instantaneous what-if calculations, and update probabilities as evidence becomes available.  (And should you happen to have a combinatorial prediction market, you can crowdsource the probabilities in a distributed fashion.  But that's our other research project.)
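
The instant what-if update a Bayes net supports is just Bayes' rule applied per item of evidence; a toy two-hypothesis sketch (my own, not the paper's Rajneeshee model):

```python
def update(priors, likelihoods):
    """Posterior over hypotheses after one item of evidence:
    multiply prior by likelihood, then renormalize."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

# Two hypotheses, with an item of evidence twice as likely under H1.
post = update({"H1": 0.5, "H2": 0.5}, {"H1": 0.8, "H2": 0.4})
```

Feed the posterior back in as the prior for the next item of evidence and you get the incremental updating described above; swapping a likelihood and re-running is the "what-if".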

The paper sets out a hypothetical analysis (with a real historical case).  Data collection is ongoing.   Their facilitated method has been tried on two groups of analysts and recently on a large (>100) group of students, all with good success.

Don Ferguson has sparked recent listserv discussion on scenario analysis and structured analytic techniques.  I think ACH is pretty good at what it does, and I think it's usually better than informal analysis.  But I think SAR planning would do better to make any structured scenario analysis fully Bayesian.  That's been a part of formal search theory since at least ~1970.


Posted in Search Theory | Tagged , , , , , , , , | Leave a comment