Comments on O'Connor 2004

Comments on the 2004 draft of "Controversial Topics in Inland SAR Planning" by Dan O'Connor. Written in 2004, updated in 2007.

Preface

This open letter was written in 2004.  At the time, some people found it useful.  In 2007, I updated it as a blog post. It was lost in the ISP crash of Oct. 2011.  In Feb. 2013, I exhumed it and posted to this WordPress site. I finally fixed the formatting in Aug. 2014. As of Feb. 2013, the Internet Archive still had the original 2004 post and the updated 2007 post. In my 2007 update, I wrote this preface:

Things have moved on a bit since then [2004]. In the summer of 2006, Rick Toman and Dan O'Connor organized sweep width experiments in the Berkshires, and provided a "Camp David" style setting for ISAR and MSAR folks to meet. (Unfortunately, it shamefully took me over a year to finish analyzing that and get the spreadsheet back to Rick. I think I still haven't posted it here. [See Berkshires 2006 Sweep Width Experiment].) As I note below, there have been developments in adding MSAR ideas to new versions of standard SAR software. But, as an exercise in blogging, here is the original post again, with minimal edits to get it displayed properly here.

Notes on "Controversial Topics in Inland SAR Planning", Draft Feb. 2004

Introduction

Daniel O'Connor submitted a white paper for commentary. The paper is titled "Controversial Topics in Inland SAR Planning". The February 2004 draft is available from:

These are my comments on that draft.

The paper argues that Inland SAR (ISAR) is not just a makeshift beast, but "in the family tree spawned by Koopman," and that ISAR methods are "adaptations to the practical realities of moving from the two-dimensional plane of the ocean to the multi-dimensional land search." Daniel O'Connor argues that some of the techniques from Maritime SAR (MSAR) are "flatly [no pun intended?] inappropriate".

I very much admire this effort and think the final version will play a valuable role in the discussion. I think, however, that this draft overstates the case, and hope my comments will be helpful. I apologize for the critical nature of many comments --- it is easier to notice things I disagree with.

Perhaps O'Connor takes MSAR more narrowly than I do. It sometimes seems that he sees MSAR as wedded to current drift models and the specifics of looking for distressed boats. Clearly he thinks MSAR is wedded to grid searching techniques. No wonder he's suspicious of it!

Rather, I take MSAR to be mathematical search theory more generally, which provides optimal resource allocation given a POA map and detection profiles. How you update POA in the absence of search is of course task dependent. In MSAR it is current and leeway. In ISAR, it will be wandering.

I often wish Jack Frost would write shorter comments. Sadly, I find I've written a long batch. So here is the quick summary:

  • Quibbles about terminology. Be more careful with "sweep width", "lateral range", etc.
  • Don't conflate MSAR with particular practices in the special setting of ocean search. I think some of the criticisms of MSAR go off-target because of this.
  • Particularly glaring: O'Connor suggests that MSAR emphasizes thoroughness over efficiency. This is a severe misunderstanding.
  • A better approach is to say that MSAR is often unnecessary, and that ISAR practice has hit upon techniques which usually get good enough. I suspect this is true. But it doesn't mean there is something wrong with MSAR theory, NOR does it mean that the theories given for the ISAR practices are correct.
  • Similarly, ISAR uses dogs etc., and MSAR has no good models yet. By all means, improve practice in the field, because it might take theory a while to be useful. But beware of hanging on to poor theory just because practice is good.

Detailed comments below, grouped by sections of the paper.

Open Systems vs. Closed Systems

I think open and closed systems are equivalent. So O'Connor is right to argue that ISAR can keep ROW if it wants. However, advocating that we need ROW and open systems is a red herring. One may be more convenient, but they are both equivalent.

It is trivial to go back and forth between open and closed systems. If you start with a closed system, and later discover that you were wrong, you have to expand your closed system. So you re-scale the initial POA estimates, and update from there. This takes 3 steps. (1,p.9) So "closed" systems can do what "open" systems do. They don't have to have all areas pre-defined.
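The re-scaling step can be sketched in code. This is my illustration, not anything from the paper or (1); the region names, the 10% figure, and the function name are invented:

```python
# Sketch (invented example): expanding a "closed" system when a new
# scenario arrives.  Step 1: shrink the old map to make room.  Step 2:
# distribute the new mass over the newly added regions.  Step 3: carry on
# updating from the revised map.

def expand_closed_system(poa, new_regions, new_mass):
    """poa: region -> POA (sums to 1.0); new_regions: region -> relative
    weight among the added regions; new_mass: total probability the new
    regions now get.  Returns the re-scaled POA map."""
    scale = 1.0 - new_mass                       # step 1: shrink old map
    rescaled = {r: p * scale for r, p in poa.items()}
    weight_sum = sum(new_regions.values())
    for r, w in new_regions.items():             # step 2: add new regions
        rescaled[r] = new_mass * w / weight_sum
    return rescaled                              # step 3: update from here

old = {"A": 0.5, "B": 0.3, "C": 0.2}
new = expand_closed_system(old, {"D": 1.0}, 0.10)
# Old regions keep their ratios (A is still 5:3 against B); D now holds 10%.
```

Note the old regions keep their relative odds, which is the point: expanding a closed system only re-scales, it never scrambles, the earlier consensus.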

On the other hand, the extreme case of open system is that you start with 1 area: ROW. Then you subdivide it into Here and ROW, and search Here. With more resources or time, you can subdivide ROW again, just as you can divide Here. So ROW is really just a convenient way to make your open system a closed system.

Also, if you initially left 5% to ROW, and you start expanding your search area on the boundaries, should you really confine yourself to 5%? No. Usually you expand not because you have so thoroughly searched the initial areas, but because you have a new scenario. That means you are re-estimating your initial POA. Just like for a closed system.

ROW does provide a convenient way to track search success: when ROW has a POA greater than the rest of the search, it may be time to go home. But if ROW starts at 5%, that's equivalent to considering home when POS is 95%. Just as convenient.

I'd like to make a few other points relevant to the table on p.8:

  • Clues: I don't see why one system is more conducive to clues. Neither has a good theory, but the one used by both (so far as I know) was suggested by (2,p.36): give a measure of credence/confidence the clue is associated with the target, and then some estimate of what it would mean for each area if that were true.
  • POA changes over time, even if no search: If the subject is moving, it also changes in land SAR.
  • Non-contiguous segments: either system can handle them. At worst, you just have a map of the whole country, and most cells have probability 0. No reason you can't have a bimodal probability distribution. And you could have area 1 and area 2 represent non-contiguous areas as long as you didn't try to do drift or distance calculations. Existing MSAR software might not be set up for this, but that's a different matter. I have a library of MSAR routines for land SAR, and they don't assume areas are contiguous. They leave that to the mapping part of a program.
  • Criminal investigations:
    • Piracy is still a problem, so the Coast Guard must have some idea of this.
    • Even if they don't, or don't treat it as a search problem, why should MSAR be unable to handle it? Again, sure, CASP can't, but that's just one software package.
  • TDA support: There are other TDAs. Commented list at http://sarbayes.org/links.shtml#software. [Link removed because it's not there anymore, and would be outdated by now anyway. -crt 2013]

Grid searching and manpower requirements

The comparative example: the point of course is that grid searching is unrealistic in ISAR. Yes. And MSAR techniques generally presume grid searching. Yes, they have. And although there is no reason they must (I can model hasty* searches fine), there is also less reason to use them until grid searching is necessary. It's not a limit of theory, it's a limit of pragmatics: why bother dragging out the computer if you're still doing hasty searches? POD on a hasty search is near 100% (along the feature only), and the darned subject might wander back onto them [the linear feature] after search anyway. MSAR will only be useful here when software can start quickly, has GIS maps with paths noted, and ideally, has moving-person models. (But then, the same complaints have been made about invoking ISAR theory at this stage.)

A couple of notes on the comparison though.

Dan, your MSAR example is unrealistic if the search object is really a vessel. MSAR would never assign such a tight track spacing, because it makes the coverage absurdly high, and wastes effort. But if the target were a person in the water, you would have a coverage of 0.8 and a POD of 67% (in ideal search conditions), because the measured sweep width of a fixed-wing aircraft for a person in the water is 0.1 NM. Numbers from (3, appendix I, esp. Table I-5 and Figure I-12). The same search over clear flat land would also waste effort, because the sweep width is 0.5 NM, giving a coverage of 4, and a POD above 98%. Of course, that drops quickly as you add vegetation, hills, haze, etc.
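The coverage arithmetic here is just sweep width over track spacing. A quick check, using the sweep widths and the 1/8 NM spacing from the text; the 67% ideal-conditions POD is read off the chart in (3), not computed, so only the land case is checked against the exponential curve:

```python
import math

# Coverage C = sweep width / track spacing.  Sweep widths (0.1 NM for a
# person in the water, 0.5 NM over clear flat land) are from (3).

def coverage(sweep_width_nm, track_spacing_nm):
    return sweep_width_nm / track_spacing_nm

spacing = 1.0 / 8.0                  # the example's track spacing, NM
c_piw = coverage(0.1, spacing)       # person in the water: 0.8
c_land = coverage(0.5, spacing)      # clear flat land: 4.0

# Even the pessimistic exponential curve reproduces "above 98%" for land:
pod_land = 1 - math.exp(-c_land)
```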

Now your comparison goes through: grid searching large areas is not feasible for grunts. But then, ISAR rarely has to search 100 square miles. If we did, your calculations suggest we should use aircraft. Consider: even in poor visibility and dense vegetation, it looks like 16 aircraft could get a 70% POD in those 5 hours (note 1). Granted, the correction factors in IAMSAR may not be severe enough, but clearly if we had to search such large areas we would use other resources. And we do.

But the fairer comparison uses our real initial POA map, because if we're dealing with ground searchers, we probably aren't covering such large areas. (Or if we are, as in the [U.S. Space] shuttle [Columbia] search, we accept that it's going to be a lot of people for a long time.)

Our initial POA map tells us our subjects will be close in, or on/near navigational aids. Likewise, MSAR initial POA maps also constrain the area to far less than a "theoretical search area" given a boat travelling at cruise speed!

Given a POA map and a set of resources, MSAR is applicable. However, one could argue that in most land cases, it is unnecessary. Unnecessary because we don't need a computer to track POS when our POD along paths is 100% and our initial search area is so constrained that we can cover it quickly. But then, that's just when we don't need CASIE either.

POA vs POD

No problems with the first page. Definitely POD does not overrule POA. Likewise, your conclusion not to use priority schemes that bias towards POD over POA. Have to use both. (By the way, what schemes emphasize POD over POA?)

On the second page you make a stronger claim. As applied to God-given POA and POD values, it does not make sense, but I think you are applying it to normal estimates, esp. with current practice. You should make that clearer. Your claim is: given choices of two assignments with equal POS, we should always prefer the one that had higher POA. I think I see your point. I think you have one good reason and one bad one. Also, since you have not mentioned search time, I'm presuming them equal. So let's look at your two reasons.

  1. Deploy to the high POA because we want to "find the subject as quickly as possible, not sweep the entire search area as quickly as possible." But if they have the same POS, and take the same length of time to search, they both find the subject as quickly as possible. No difference. On the other hand, if for equal POS, one area took less time to search, that is the one to search first, whichever it is.
  2. Deploy to the high POA because POD is "likely to be higher than its calibrated value." The idea is that POD estimates are uncertain, and that low POD estimates (like this one, 10%) are probably underestimates, so the POS here should be higher. Given that high POD estimates are probably overestimates, the POS in the other region should be lower, further backing your decision. I'm suspicious of your defense in terms of chance favoring detection. Chance could equally favor undetection. But human judgements of probabilities are known to display overconfidence: high values are too high, and low values are too low. So that is a much stronger argument that does not rely on chance being providential. The weak point is that POA judgements could be equally affected. Here you might assert expert judgement or experience to say that POA estimates are generally more accurate. I think they're messy, but I know POD estimates are messy. That's been established many times. So you can make a case.
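Point 2 can be put in numbers. A sketch with invented figures and an assumed debiasing model (a linear pull of each POD estimate toward 0.5, standing in for the overconfidence correction; the 20% shrink factor is arbitrary):

```python
# Two assignments with equal nominal POS = POA x POD.  Debiasing the POD
# estimates -- nudging low ones up and high ones down -- breaks the tie in
# favor of the high-POA assignment, which is O'Connor's stronger claim.

def debias(pod, shrink=0.2):
    """Pull a POD estimate toward 0.5 by a fixed fraction (assumed model)."""
    return pod + shrink * (0.5 - pod)

poa_a, pod_a = 0.60, 0.10   # high POA, low (probably understated) POD
poa_b, pod_b = 0.10, 0.60   # low POA, high (probably overstated) POD
# Nominal POS is 0.06 either way.

pos_a = poa_a * debias(pod_a)   # rises:  0.60 * 0.18 = 0.108
pos_b = poa_b * debias(pod_b)   # falls:  0.10 * 0.58 = 0.058
# pos_a > pos_b: prefer the high-POA assignment.
```

The same arithmetic shows the weak point noted above: if POA estimates were debiased by the same rule, the advantage would shrink again.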

Tactical Decision Aids

Note: Things have happened since I wrote this in 2004. Most notably, in 2006 & 2007, O'Connor & Lovelock (and probably others) have created a Windows version of CASIE (WinCASIE III), that incorporates some optimal resource allocation algorithms, including one from Washburn. Also, Search Manager is moving in that direction, and has some decent GIS support. --crt [2007]

Actually, the reason MSAR proponents don't advocate CASIE or Search Manager is that the resource allocation algorithms are notably sub-optimal. The one in CASIE wouldn't be too bad if the POD values could be trusted. But since the ones used aren't related to effort expended, they are suspect. Not entirely CASIE's fault. If they could be trusted, and you had enough computer time, CASIE would eventually find the best solution available that didn't split (or combine?) resources.

Using stuff from Washburn, you can do better. Either you can allow splits and get truly optimal OPOS, or you can use that true optimum to greatly speed the search for the best CASIE-style optimum. CASIE uses a brute-force search. Search Manager used to use some very dodgy hacks for search priority. I know Martin put a lot of work into revising those, but I haven't looked at the new version. My fault, I should.
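To give the flavor of doing better than brute force: with an exponential detection function, a simple greedy scheme that feeds each small slice of effort to the region with the highest marginal POS gain converges on the optimal split-allowed allocation. This is a sketch of that idea, not Washburn's actual algorithm; regions, rates, and effort units are invented:

```python
import math

# Greedy incremental allocation under exponential detection:
# POD in region r after effort z is 1 - exp(-rate[r] * z).
# Each step of effort goes where the marginal POS gain is largest.

def allocate(poa, rate, total_effort, step=0.01):
    """poa: region -> POA; rate: region -> effective sweep rate (W*v/A).
    Returns region -> effort allocated."""
    effort = {r: 0.0 for r in poa}
    for _ in range(int(round(total_effort / step))):
        # marginal POS gain from one more step of effort in region r
        best = max(poa, key=lambda r: poa[r] * (
            math.exp(-rate[r] * effort[r])
            - math.exp(-rate[r] * (effort[r] + step))))
        effort[best] += step
    return effort

alloc = allocate({"A": 0.6, "B": 0.4}, {"A": 1.0, "B": 1.0}, 4.0)
# Effort floods the high-POA region first, then spreads to B as A's
# marginal gain decays.
```

With splittable resources this is (in the limit of small steps) the true optimum; it can also serve as a bound to prune a CASIE-style discrete search.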

Dan: who has been advocating going back to pencil-and-paper? I get the feeling we're not talking to the same MSAR people!

Mattson Consensus vs. Initial Distribution

If the initial probabilities had to be locked down, you might be right here. But as noted above, and in (1,2), they can easily be revised on the fly as you get new scenarios, new experts, etc.

My ideal system would do this:

  1. You open map and put in PLS.
  2. You choose subject category. Enter whatever data you have.
  3. You're the only one on-scene. You do a quick scenario and put your own area estimates in, based on route.
  4. Using terrain/veg features from the map, and subject profile, the program gives an initial distribution based on past data, either from straight statistics or doing a motion model over the terrain (see Hugh Round's web page for that).
  5. Program weights the two according to your prefs.
  6. Later you learn more about the subject (or terrain), or get another expert to give their subjective map. Program revises this initial POA map, and then re-applies any POD results.
  7. You get a clue. Same as above: learning more about subject, plans, terrain are all clues too. Footprints may require search manager to estimate impact of clue on existing areas.

That is all part of MSAR. False sense of precision: yes, always a risk in using computer models. We can ameliorate it by entertaining and tracking multiple scenarios, and can track uncertainties, but it is always a problem when you run a computer model. Same argument applies to using CASIE though.
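Step 6's "re-apply any POD results" is the standard Bayesian adjustment after an unsuccessful search, of the kind described in (2). A minimal sketch with invented regions and values:

```python
# After an unsuccessful search, shift probability mass away from searched
# regions in proportion to how well they were searched, then renormalize.

def update_poa(poa, pod):
    """poa: region -> POA; pod: region -> POD achieved there (unsearched
    regions simply omitted).  Returns the revised POA map."""
    unnorm = {r: poa[r] * (1.0 - pod.get(r, 0.0)) for r in poa}
    total = sum(unnorm.values())
    return {r: p / total for r, p in unnorm.items()}

poa = {"Here": 0.6, "ROW": 0.4}
after = update_poa(poa, {"Here": 0.8})   # searched Here with POD 80%
# Here: 0.6 * 0.2 = 0.12 -> 0.12/0.52 ~ 0.23;  ROW -> 0.40/0.52 ~ 0.77
```

Because the update is just a re-weighting, it can be re-applied on top of any revised initial map, which is exactly why the Mattson consensus need not be locked down.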

Quibbles (but some of consequence)

p.4: "32% of the defined SA has been cleared"
Hmm. That's not clear. Sounds too much like saying "this part over here is cleared, that part isn't". Also, it's not that 32% of the original POA has been "swept up": it's 80% of that. Maybe better to say POS=32%, meaning that of our original 40 units, we've accounted for 32, leaving 8. (I don't know if it's clearer, but you could also do it the other way. Since the original probability of missing the subject was 20%, after searching, there is a .2 x .4 = 8% chance the subject is still there.)
p.4: "With the exception...."
True. But then, when it does, it uses the same theory. A beautiful account in the popular book Blind Man's Bluff.
p.9: "decides to use a sweep width of 1/8th of a mile"
You'll need to be careful of terms here. Hopefully Jack reads this before replying. I think I'll be briefer. 🙂 In MSAR, "sweep width" is not something you can decide. You should use [i.e. write] "track spacing" here instead. Likewise, "lateral range" is not just half the track width, so don't use that term, even later when describing ISAR grids. The phrase "POD or coverage" suggests they are identical. Maybe "high coverage (hence high POD)".
several: You say MSAR is devoid of clues.
But it is not. Debris fields, fuel slicks, even occasional floating things are tracked, noted, and used. So are radio contacts. [And see the (in)famous paper, "The flaming datum problem".]

References (more complete available on request)

  1. Cooper & Frost 2000. Selected Inland Search Definitions. Advanced Rescue Technology, http://www.advancedrt.com/articles/SelectedDefsDCC.pdf
  2. Cooper 2000. The application of search theory to land search: adjustment of Probability of Area (POA). Advanced Rescue Technology, http://www.advancedrt.com//articles/AdjPOADCC.pdf.
  3. Richardson & Corwin 1980. Computer-assisted search. In Haley & Stone, Search Theory & Applications.
  4. AusSAR manual, based on the IAMSAR manual. Available in sections or entire from: http://www.amsa.gov.au/natsar/Manuals/Search_and_Rescue_Manual/Index.asp.

 


Footnotes

[*hasty: A "hasty" search used to be defined as a fast two-person search by trained searchers along trails and other linear features which had both high POA and high POD. However, because the term suggests carelessness, these have been incorporated into the initial "Reflex Tasks" to be undertaken as soon as the search begins. -crt 2014]

Note 1: Here, in addition to the correction factors for bad weather etc., I have used the exponential curve for "poor" detection, which should be applied whenever navigation is not perfect, or generally conditions are in any way bad. It's lower than the main curve in Figure I-12. The corresponding chart in IAMSAR gives it. So do many of Frost's publications. That's the main point of this footnote.

For mathies, the "poor" value is easy to compute: [ 1 - exp(-C) ], where C is coverage. In this case, coverage is easy to compute. It's just sweep width divided by spacing. Sweep width is 0.5 NM times a bunch of correction factors making it 0.02. Spacing is 1/8 NM. So we get C = 0.02 x 8 = 0.16.

That gives POD = 0.14. But if we had 8 planes, we multiply coverage by 8. Then we have [ 1 - exp(-1.28) = 72% ].

You're still reading this footnote? Wow. You must be really interested. Well, here, let's spot a slight fudge above. If I'm using the "poor" search curve, it's likely that navigation is not perfect. So in theory I shouldn't use the "easy" formula for coverage, since our tracks may be wobbly. But I don't think it's going to matter for this example. Please, go back to the text now.
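For the truly committed, the footnote's arithmetic as code (same numbers as above; only the variable names are mine):

```python
import math

# The "poor" detection curve: POD = 1 - exp(-C), with C = sweep width / spacing.

w = 0.02            # corrected sweep width, NM (0.5 NM x correction factors)
s = 1.0 / 8.0       # track spacing, NM
c = w / s           # coverage: 0.16

pod_1 = 1 - math.exp(-c)        # one plane:    ~0.148, i.e. "POD = 0.14"
pod_8 = 1 - math.exp(-8 * c)    # eight planes: ~0.72
```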

 

Author: ctwardy

Charles Twardy started the SARBayes project at Monash University in 2000. Work at Monash included SORAL, the Australian Lost Person Behavior Study, AGM-SAR, and Probability Mapper. At George Mason University, he added the MapScore project and related work. More generally, he works on evidence and inference with a special interest in causal models, Bayesian networks, and Bayesian search theory, especially the analysis and prediction of lost person behavior. From 2011-2015, Charles led the DAGGRE & SciCast combinatorial prediction market projects at George Mason University, and has recently joined NTVI Federal as a data scientist supporting the Defense Suicide Prevention Office. Charles received a Dual Ph.D. in History & Philosophy of Science and Cognitive Science from Indiana University, followed by a postdoc in machine learning at Monash.
