1pm Session: Human Factors
In this session there were a couple of forensic examiners in attendance as well as several researchers, gathered to discuss how human factors play into introducing statistics into forensics. The forensic methods discussed included autopsy, fingerprinting, and DNA analysis, to name a few. Several statistical methods were tied in with the forensic analysis methods, like conditional probability, likelihood, and bias. If I were to choose which project from this group to work on, I would choose Human Factors in Visual Identification: A Cross-Cutting Research Proposal. This project takes into account a person’s past experiences and tests how they visually interpret an image. The results are then used to train an algorithm to estimate the probability of a person making a particular decision in a given situation. This project is essentially quantifying a person based on past experiences, and I find that fascinating. In the future I would like to see this algorithm tested against a jury to see what decision they would make, but that would need to account for many more variables and would take time to get that complex. One of the most interesting things I got from this session was that not everyone agrees that statistics should be implemented in forensics. The forensic examiners disagreed that evidence should be examined objectively and argued that subjective analysis is sufficient. I believe that a subjective analysis should be used as a hypothesis and supported with statistics to be more factual and accurate.
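The conditional probabilities discussed in this session can be sketched with Bayes' theorem, which updates a prior belief after seeing evidence. This is only an illustrative toy, not anything from the session itself; all the numbers below are invented.

```python
# Hypothetical sketch: Bayes' theorem turns a prior belief in a hypothesis H
# into a posterior belief after observing evidence E. Numbers are invented.

def bayes_posterior(prior: float, p_evidence_given_h: float,
                    p_evidence_given_not_h: float) -> float:
    """P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | ~H) P(~H)]."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / denominator

# Starting from a 50/50 prior, evidence that is eight times more likely
# under H than under not-H pushes the posterior close to 0.89.
print(bayes_posterior(prior=0.5, p_evidence_given_h=0.8,
                      p_evidence_given_not_h=0.1))
```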
3:15pm Session: Firearms/Handwriting
There were several forensic methods discussed, like handwriting analysis, 3-D imaging of bullets, and matching bullets by their striae. Statistical methods used along with the forensic methods included conditional probability, likelihood ratios, and evaluating errors stemming from examiners’ subjectivity. If I were given the chance, I would want to work on the Statistical and Algorithmic Approaches to Matching Bullets project. It has had some recent success with bullets separate from the study the algorithm was trained on and seems to have promise. In the future I think the project would perform better with a larger sample size, which would make for more accurate probabilities. The most intriguing thing I learned in the session was how algorithms are made. I thought they were the equivalent of computer voodoo, but they are actually programmed based on collected data. This means that an algorithm is only as good as the data it’s based on.
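The likelihood ratios mentioned above compare how probable the observed evidence is under two competing hypotheses. A minimal sketch, with invented numbers purely for illustration:

```python
# Hypothetical sketch: a likelihood ratio weighs the evidence under two
# competing hypotheses. A ratio well above 1 supports the first hypothesis.

def likelihood_ratio(p_evidence_given_same: float,
                     p_evidence_given_different: float) -> float:
    """LR = P(E | same source) / P(E | different source)."""
    return p_evidence_given_same / p_evidence_given_different

# Invented example: the observed striae similarity is far more probable
# if the two bullets share a source than if they do not.
lr = likelihood_ratio(0.90, 0.01)
print(lr)  # roughly 90, favoring the same-source hypothesis
```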
Poster Session: Automatic Matching of Bullet Lands
I spoke with Heike Hoffmann from Iowa State University about her bullet matching project. The project sought to answer the question, “Were two bullets fired from the same gun barrel?” The reported result was that the project had created an algorithm that gives a probability that two bullets match. If the probability is high, the bullets are thought to have been fired from the same gun barrel. The algorithm was trained on the Hamby Study, using roughly 18,000 bullets, and analyzed topographical images of the bullets. I noticed that the algorithm was tested on a sample from the Phoenix Police and asked how accurate the results were. Heike said that it was able to correctly determine most of the bullets, but there were several errors, meaning it isn’t perfect yet.
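The kind of output described here, a probability that two bullets match, can be sketched with a toy model that maps a striae-similarity score to a probability. Everything below is hypothetical: the similarity score, the logistic form, and the parameters are my own invented stand-ins, not the actual trained algorithm.

```python
import math

# Hypothetical sketch: convert a striae-similarity score (0 to 1) into a
# match probability with a logistic curve. The weight and threshold are
# invented; a real system would learn its parameters from training data.

def match_probability(similarity: float, weight: float = 12.0,
                      threshold: float = 0.5) -> float:
    """Logistic mapping: scores above `threshold` push probability toward 1."""
    return 1.0 / (1.0 + math.exp(-weight * (similarity - threshold)))

high = match_probability(0.9)  # strong striae agreement -> near 1
low = match_probability(0.2)   # weak striae agreement -> near 0
print(round(high, 3), round(low, 3))
```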