An abstract just crossed my desk that I'd love to share. Briefly, adding meaningless math to your academic paper inordinately impresses humanities PhDs. The author does not say whether this also works in pickup lines, so there's room for follow-on research.
Posted in Links
Tagged humor, math
Just a quick note to highlight Paul Doherty's new research page. It includes:
- Overview of his research
- Publications list
- Software & Datasets page, including links to MapSAR and discussion groups.
- Links page with a SAR & GIS bibliography, including the memorably titled
- Heggie, Travis W, and Michael E Amundson. 2009. “Dead Men Walking: Search and Rescue in US National Parks.” Wilderness & Environmental Medicine.
- And the humorously mangled: Is, Information, Releasable To, and Foreign Nationals. “Search and Rescue Optimal Planning System ( SAROPS ).” Training 2.
- And three articles it sounds like I should read soon:
- Jobe, T.R., and P.S. White. 2009. “A New Cost-distance Model for Human Accessibility and an Evaluation of Accessibility Bias in Permanent Vegetation Plots in Great Smoky Mountains National Park, USA.” Journal of Vegetation Science: 1099–1109.
- Miller, Harvey J., and Scott A. Bridwell. 2009. “A Field-Based Theory for Time Geography.” Annals of the Association of American Geographers 99 (1) (January 8): 49–75. link
- Pingel, Thomas J. 2011. “Estimating an Empirical Hiking Function from GPS Data.” Sports Medicine: 1–3.
At Mason we're collaborating with Paul to test a Watershed-Distance model developed by his research group. Based on 58 tests run so far by Elena Sava on MapScore, this simple model scores 0.55. Not bad for a model that doesn't yet discriminate by category (or any other feature). Elena just finished a multivariate model combining Watersheds with the more usual crow's-flight distance, and we will begin testing that soon.
In the previous post, we began to build a theory of detection over time as the result of a very large number of independent glimpses. By assuming the environment to be fixed for a while, we moved all the environmental factors into a constant (to be measured and tabulated), and simplified the function so it depended only on the range to the target.
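As a toy illustration of the glimpse idea (a generic sketch, not the exact formulas from these posts; the falloff constant and ranges below are made up), here's how detection probability accumulates over many small independent glimpses, each depending only on range:

```python
import math

def glimpse_prob(r, k=0.05):
    """Hypothetical per-glimpse detection probability: all environmental
    factors folded into the constant k; depends only on range r."""
    return min(1.0, k / r**3)  # inverse-cube-style falloff (illustrative)

def cumulative_detection(ranges, k=0.05):
    """P(at least one detection) across a sequence of independent glimpses."""
    p_miss = 1.0
    for r in ranges:
        p_miss *= 1.0 - glimpse_prob(r, k)
    return 1.0 - p_miss

# For tiny glimpse probabilities g_i:  1 - prod(1 - g_i) ~ 1 - exp(-sum g_i)
ranges = [2.0 + 0.01 * i for i in range(200)]  # searcher gradually moving away
exact = cumulative_detection(ranges)
approx = 1.0 - math.exp(-sum(glimpse_prob(r) for r in ranges))
```

The exponential approximation at the bottom is the standard step that turns a product of many small miss probabilities into an integral, which is what makes the theory tractable.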
In this post we simplify still further, introducing lateral range curves and the sweep width (also known as effective sweep width). We will follow Washburn's Search & Detection, Chapter 2. (So there's nothing new in this post. Just hopefully a clear and accessible presentation.)
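As a concrete sketch of the definitions (standard search theory, not anything specific to this post): the lateral range curve p(x) gives the probability of detecting a target whose closest approach to the searcher's track is x, and the sweep width W is the area under that curve. For Koopman's inverse-cube lateral range curve, p(x) = 1 - exp(-m/x²), the integral works out to W = 2√(πm), which we can confirm numerically:

```python
import math

def inverse_cube_lrc(x, m=1.0):
    """Inverse-cube lateral range curve: P(detect | closest approach x)."""
    if x == 0:
        return 1.0
    return 1.0 - math.exp(-m / x**2)

def sweep_width(m=1.0, half_range=200.0, n=200_000):
    """Numerically integrate the lateral range curve over [-R, R]
    with the midpoint rule."""
    dx = 2 * half_range / n
    total = 0.0
    for i in range(n):
        x = -half_range + (i + 0.5) * dx
        total += inverse_cube_lrc(x, m) * dx
    return total

m = 1.0
numeric = sweep_width(m)
analytic = 2 * math.sqrt(math.pi * m)  # known closed form for this curve
```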
We begin a four-part gentle introduction to search theory. Our topic is visual detection of targets by land searchers. Today we summarize Koopman Chapter 3, constructing the useful "inverse cube" detection model by starting from instantaneous glimpses with tiny detection probabilities.
The SARBayes MapScore server has been running for a month now at http://mapscore.sarbayes.org. It's a portal for scoring probability maps, so researchers like us can measure how well we are doing, and see which approaches work best for which situations. Take a look. (And if you have a model, register and start testing it!)
Don Ferguson just sent me an update on the MapSAR project -- he's presenting at a project meeting this week in the Grand Canyon. I'm blown away by his slides. They've got it: a GIS enabled search planning tool with a foundation in search theory. They've even got tools for various kinds of probability maps, and POD models. I'd only been following this peripherally. That has to change. I've just signed up for the various groups and can't wait to test the software.
Lin & Goodrich at Brigham Young are working on Bayesian motion models for generating probability maps. They have an interesting model, but need GPS tracks to train it. It's a nice complement to our approach, and it will be interesting to see how they compare.
~Originally a very cool review published in the first half of 2010. The review led to phone calls and a very productive collaboration on MapScore and other work.
Partly reconstructed March 2012.
Syrotuck's main study is his 1976 paper, with N=242. But he gives much more detail about distance travelled in his 1975 paper, breaking distance down every 0.2 miles. Unfortunately, he only reports probabilities, not numbers, and doesn't even report the total N. We know he got more data between 1975 and 1976, but not how much. Is the 1975 breakdown representative of the 1976 data? Unfortunately, no one has Syrotuck's original data. But we re-created it. (Spreadsheets available!)
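One way a reconstruction like this can be sketched (hypothetical; the fractions below are made up and are NOT Syrotuck's data): when each 0.2-mile bin is reported only as a rounded percentage, you can search for the total sample sizes N under which every reported percentage corresponds to a whole number of cases:

```python
def plausible_totals(reported_pcts, n_max=500, tol=0.005):
    """Find sample sizes N for which every reported fraction is consistent
    with an integer count, given that percentages were rounded."""
    candidates = []
    for n in range(1, n_max + 1):
        counts = [round(p * n) for p in reported_pcts]
        if all(abs(c / n - p) <= tol for c, p in zip(counts, reported_pcts)):
            candidates.append(n)
    return candidates

# Made-up example fractions for five 0.2-mile distance bins
pcts = [0.10, 0.25, 0.35, 0.20, 0.10]
ns = plausible_totals(pcts)
```

Here any multiple of 20 reproduces the example fractions exactly, so the percentages alone can't pin down N; outside knowledge (like the 1976 N=242) is needed to narrow it down.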
How do you turn a USGS map or satellite photo into vector data? A couple of links, and some questions.
Back when Adam Golding and I prototyped Probability Mapper, we already had an algorithm that could give the probability based on distance, terrain, vegetation, and other factors. But we were working with raster images, so we only had distance. If you had vector layers for terrain & vegetation, you'd be set.
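A minimal sketch of the raster side (an assumed approach, not the actual Probability Mapper code): compute each cell's distance from the last-known point with a breadth-first search over the grid, then map distance to an unnormalized probability with any decreasing function:

```python
import math
from collections import deque

def grid_distances(rows, cols, start):
    """BFS distance (in cells, 4-connected) from `start` to every grid cell."""
    dist = [[None] * cols for _ in range(rows)]
    r0, c0 = start
    dist[r0][c0] = 0
    q = deque([start])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def probability_map(dist, scale=5.0):
    """Unnormalized probability that falls off exponentially with distance."""
    return [[math.exp(-d / scale) for d in row] for row in dist]

dist = grid_distances(10, 10, (5, 5))
pmap = probability_map(dist)
```

With vector layers for terrain and vegetation, the BFS step costs could be weighted per cell class instead of being uniform, which is exactly the extra information the raster-only version lacks.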
~Originally 9 June 2008