In the summer of 2006, Rick Toman (Massachusetts State Police) and Dan O'Connor (NewSAR) organized a sweep width experiment and summit called "Detection in the Berkshires" at Mount Greylock in Massachusetts. In addition to the sweep-width experiment, Perkins & Roberts provided search tactics training for several teams, and the summit gave us a chance to explore similarities and differences between formal search theory and formalized search tactics. It was an important chance to meet many key people, compare notes, and discuss ideas. I wish I had been more diligent about following up. Many thanks to Rick & Dan for organizing the event, and to the many others listed at the end. This post, however, is mostly to provide a reference for the sweep width experiment.
The C4ISR Journal had a recent search theory article quoting me along with Larry Stone. I'm quite honored, like the British company in Dirk Gently's Holistic Detective Agency:
...was the only British software company that could be mentioned in the same sentence as ... Microsoft.... The sentence would probably run along the lines of ‘...unlike ... Microsoft...’ but it was a start.
It's a good article, covering the undeniably exciting historical origins in hunting U-boats, and looking at what may be a modern renaissance. I think the article stretches to connect search theory with Big Data, but the author does note that when the data is visual, and you have humans scanning it for objects, there is a connection. With planning, search theory could have been used to prioritize the Amazon Mechanical Turk search for Jim Gray. (The resolution of the actual images in that search was probably too low regardless, but the core idea was sound.)
OSARA conference link, with highlights from Ken Chiacchia's talk.
When searching for an image for this post, I came across several works by E.B. Banning applying search theory to archaeology:
- Sweep widths and the detection of artifacts in archaeological survey (2011). [Science Direct]
- Detection functions for archaeological survey (2006). [JSTOR]
- Archaeological Survey (2002 book). [Google books]
Now what would archaeologists be doing with sweep widths? Looking for nails, sherds, and other small objects in the soil. What they nicely call "small scatters of generally unobtrusive artifacts on the surface".
In WiSAR we call them clues.
In the previous post, we began to build a theory of detection over time as the result of a very large number of independent glimpses. By assuming the environment to be fixed for a while, we moved all the environmental factors into a constant (to be measured and tabulated), and simplified the function so it depended only on the range to the target.
In this post we simplify still further, introducing lateral range curves and the sweep width (also known as effective sweep width). We will follow Washburn's Search & Detection, Chapter 2. (So there's nothing new in this post. Just hopefully a clear and accessible presentation.)
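The central definition is easy to sketch numerically: the effective sweep width W is the area under the lateral range curve, i.e. the integral of detection probability over all lateral ranges. Here is a minimal Python illustration, assuming a made-up Gaussian-shaped lateral range curve rather than measured data (the 25 m scale is purely hypothetical):

```python
import numpy as np

def sweep_width(lateral_range_curve, x_max, n=10001):
    """Effective sweep width W: integrate the lateral range curve
    p(x) over [-x_max, x_max] using the trapezoidal rule."""
    x = np.linspace(-x_max, x_max, n)
    p = lateral_range_curve(x)
    return np.trapz(p, x)

# Hypothetical lateral range curve: detection probability
# falls off smoothly with distance off-track (sigma = 25 m).
curve = lambda x: np.exp(-x**2 / (2 * 25.0**2))

W = sweep_width(curve, x_max=200.0)
# For this Gaussian curve, W = sigma * sqrt(2*pi), about 62.7 m.
```

In practice the lateral range curve comes from field experiments like the Mount Greylock one; the integral is the same either way.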
We begin a four-part gentle introduction to search theory. Our topic is visual detection of targets by land searchers. Today we summarize Koopman Chapter 3, constructing the useful "inverse cube" detection model by starting from instantaneous glimpses with tiny detection probabilities.
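The core construction in that chapter, many independent glimpses, each with a tiny detection probability, compounding into an overall detection probability, can be sketched as follows. The glimpse probabilities here are made-up numbers for illustration:

```python
import math

def detection_probability(glimpse_probs):
    """Probability of at least one detection from a sequence of
    independent glimpses: P = 1 - prod(1 - g_i)."""
    p_miss = 1.0
    for g in glimpse_probs:
        p_miss *= (1.0 - g)
    return 1.0 - p_miss

# Many tiny glimpses: P is well approximated by 1 - exp(-sum(g_i)),
# the exponential form that underlies the inverse cube model.
glimpses = [0.001] * 500
P = detection_probability(glimpses)
approx = 1.0 - math.exp(-sum(glimpses))
```

The exponential approximation is what lets the environmental details be folded into a single measurable constant.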
The SARBayes MapScore server has been running for a month now at http://mapscore.sarbayes.org. It's a portal for scoring probability maps, so researchers like us can measure how well we are doing, and see which approaches work best for which situations. Take a look. (And if you have a model, register and start testing it!)
Don Ferguson just sent me an update on the MapSAR project -- he's presenting at a project meeting this week in the Grand Canyon. I'm blown away by his slides. They've got it: a GIS-enabled search planning tool with a foundation in search theory. They've even got tools for various kinds of probability maps, and POD models. I'd only been following this peripherally. That has to change. I've just signed up for the various groups and can't wait to test the software.
Lin & Goodrich at Brigham Young are working on Bayesian motion models for generating probability maps. They have an interesting model, but need GPS tracks to train it. It's a nice complement to our approach, and it will be interesting to see how they compare.
Originally a review of this very cool paper, published in the first half of 2010. The review led to phone calls and a very productive collaboration on MapScore and other work.
Partly reconstructed March 2012.