USCG wants a portable infrared WiSAR detector. This RFI was posted on 2-OCT:
The Coast Guard Research and Development Center (RDC) is conducting market research to identify technologies suitable for conducting IR searches on foot for persons on frozen waterways. The parameters include detection at one mile, recognition at one-half mile, and identification at approximately one-quarter mile by personnel on foot (monopod is possible). The parameters also include the need to function in extremely cold temperatures, be temporarily submersible, and function regardless of weather conditions or the time of day/night for IR detection.
A logical extension of the Distance Rings model is to fit a smooth function to the distribution of data found in ISRID. Examining the Euclidean Distance data for different categories, we found that a lognormal curve roughly captured the shape of the data. The Log-Normal (LN) is a two-parameter distribution which assumes that the logarithm of the data follows a normal distribution. Its probability density function is

f(x; μ, σ) = 1 / (x σ √(2π)) · exp(−(ln x − μ)² / (2σ²)),

where μ and σ are the mean and standard deviation of the logarithm of distance.
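Fitting the LN curve amounts to taking logs of the distances and computing their mean and standard deviation. A minimal sketch, using made-up distances in place of the actual ISRID data:

```python
import numpy as np

# Hypothetical Euclidean distances (km); real values would come from ISRID.
distances = np.array([0.3, 0.6, 0.9, 1.2, 1.6, 2.1, 3.0, 4.8])

# Maximum-likelihood estimates of the lognormal parameters:
# mu and sigma are the mean and standard deviation of log(distance).
log_d = np.log(distances)
mu, sigma = log_d.mean(), log_d.std()

def lognormal_pdf(x, mu, sigma):
    """Density of the two-parameter lognormal distribution."""
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * np.sqrt(2 * np.pi)
    )

# Evaluate the fitted curve at 1 km from the IPP.
print(lognormal_pdf(1.0, mu, sigma))
```

The same fit is available as `scipy.stats.lognorm.fit` if SciPy is on hand; the hand-rolled version above just makes the log-transform explicit.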
Thanks very much to summer intern Jonathan Lee (@jonathanlee1) for many MapScore fixes. Jonathan is a keen Python programmer with extra geek points for running Linux on his Macbook Air and having an ASCII-art avatar. He learned his way around Django in no time and brought us a slew of features and code refactoring.
One of our SciCast forecasters posted an excellent analysis of how he estimated the (remaining) chance of success for Bluefin-21 finding MH370 by the end of the question.
Forecast trend for Bluefin-21 success, on SciCast.
Jkominek was wondering why the probability kept jumping up, and created a Bayes Net to argue that there was no good estimation reason for it. (There may be good market reasons -- cashing in to use your points elsewhere.)
Bayes net model created by jkominek to explore the Bluefin question.
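The intuition behind jkominek's argument can be sketched with a simple Bayesian search update (my illustration, not his model): if the wreck is in the sonar search area with some prior probability and each day of searching has some chance of detecting it, then every unsuccessful day should push the posterior down, never up. The numbers below are assumptions chosen purely for illustration:

```python
# Hypothetical Bayesian search update. Assumed values, for illustration only:
p = 0.6  # prior probability the target is inside the search area
d = 0.1  # daily probability of detection, given the target is there

posterior = p
for day in range(1, 6):
    # Bayes: P(in area | another day with no find)
    #   = posterior * (1 - d) / (posterior * (1 - d) + (1 - posterior))
    posterior = posterior * (1 - d) / (1 - posterior * d)
    print(f"day {day}: P(target in area | no find) = {posterior:.3f}")
```

Under this model the trajectory is monotonically decreasing, which is why a repeatedly jumping market price suggests market dynamics (such as forecasters cashing out points) rather than new estimation evidence.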
The following figure is from a recent paper I co-authored*:
Figure from Karvetski et al. 2014 showing we get more accuracy by ignoring incoherent estimates than by simple unweighted averages. (The unfortunately abbreviated 'BS' means 'Brier Score'. Lower is better, with 0 being perfect.)
What implications does it have for making subjective "consensus" probability maps at the start of a search?
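One possible answer is to screen or down-weight incoherent judgments before averaging them into a consensus map. The sketch below is my own toy weighting scheme, not the method from Karvetski et al. 2014; it only illustrates the idea that a forecaster whose P(A) and P(not A) fail to sum to one can be given less influence:

```python
# Toy coherence weighting (illustrative, not the paper's method).
# Each (hypothetical) forecaster reports P(A) and P(not A); a coherent
# pair sums to 1.
estimates = {
    "alice": (0.7, 0.3),  # coherent
    "bob":   (0.8, 0.4),  # incoherent: sums to 1.2
    "carol": (0.6, 0.5),  # incoherent: sums to 1.1
}

def incoherence(p_a, p_not_a):
    """How far the pair is from satisfying P(A) + P(not A) = 1."""
    return abs(p_a + p_not_a - 1.0)

# Weight each forecaster inversely to incoherence (epsilon avoids /0).
weights = {k: 1.0 / (incoherence(*v) + 1e-3) for k, v in estimates.items()}
total = sum(weights.values())
consensus = sum(weights[k] * estimates[k][0] for k in estimates) / total
print(f"coherence-weighted P(A) = {consensus:.3f}")
```

For an initial-probability map, the analogous move would be to check each planner's region probabilities sum to one (plus a rest-of-world term) before pooling them.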
One of the giants in Bayesian statistics passed away at his home earlier this week, coincidentally at the start of the O'Bayes 250 conference marking the 250th anniversary of the publication of Bayes' paper. Writeup in X'ian's 'Og.
In the summer of 2006, Rick Toman (Massachusetts State Police) and Dan O'Connor (NewSAR) organized a sweep width experiment and summit called "Detection in the Berkshires" at Mount Greylock in Massachusetts. In addition to the sweep-width experiment, Perkins & Roberts provided search tactics training for several teams, and the summit gave us a chance to explore similarities and differences between formal search theory and formalized search tactics. It was an important chance to meet many key people, compare notes, and discuss ideas. I wish I had been more diligent about following up. Many thanks to Rick & Dan for organizing the event, and to many others listed at the end. However, this post is mostly to provide a reference for the sweep width experiment.