Of all the intriguing research presented on Tuesday at the All Hands Meeting, the category that stood out to me most was Statistical Foundations and Human Factors. These projects focused less on the evidence itself and more on improving how forensic evidence is processed and evaluated.

Dr. Simon A. Cole, from the University of California, Irvine, presented a project in this category. He began with a five-minute report on the Analysis of Forensic Testimony and Reports, in which he discussed why understanding forensic evidence matters for exonerating innocent people. He stated that the misapplication of forensic science is the second leading contributor to wrongful convictions, behind only eyewitness misidentification.

The title of his poster was “Analysis of Forensic Evidence and Wrongful Convictions,” co-authored with Alyse Berthenthal and Matt Barno. The purpose of the project is to understand empirically what forensic evidence appears in convictions later found to be wrongful, and how often that evidence was misused at trial. The main research questions were, “What areas of forensic science are most frequently present in cases resulting in wrongful convictions?” and “What problems with forensic evidence appear in cases that can be addressed by statistics?” The data came from the National Registry of Exonerations: descriptive statistics of 524 cases spanning 29 forensic disciplines, combined with qualitative coding (systematically categorizing case records) to produce the results. Dr. Cole used bar graphs, pie charts, and line charts to show, respectively, the percentage of exonerations by contributing factor, the prevalence of false or misleading forensic evidence, and exonerations by year (DNA and non-DNA evidence). Dr. Cole and his research partners concluded that “high rates of false and misleading forensic evidence suggest structural errors in admission and presentation of forensic science in criminal trials.”
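To make the kind of tabulation behind those charts concrete, here is a minimal sketch in Python. The column names and toy records are my own illustrative assumptions, not the actual schema or contents of the National Registry of Exonerations.

```python
# Illustrative sketch only: hypothetical column names and toy records stand in
# for the 524 coded cases from the National Registry of Exonerations.
import pandas as pd

cases = pd.DataFrame({
    "discipline": ["serology", "hair comparison", "bite marks",
                   "serology", "DNA", "hair comparison"],
    "false_or_misleading": [True, True, True, False, False, True],
})

# Share of wrongful-conviction cases in which each discipline appears.
share_by_discipline = cases["discipline"].value_counts(normalize=True)

# Rate of false or misleading evidence within each discipline.
error_rate = cases.groupby("discipline")["false_or_misleading"].mean()

print(share_by_discipline)
print(error_rate)
```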

The second area of research that interested me was Pattern Evidence. Dr. Hal Stern, from the University of California, Irvine, and Co-Director of CSAFE, presented the project report in this area of CSAFE research. His presentation was entitled “Signature Complexity and Forensic Science Document Examination.” He partnered on this project with Dr. Alicia Carriquiry, Director of CSAFE, and graduate student Amy Crawford, both from Iowa State University. Crawford presented the poster, entitled “Statistical Analysis of Letter Importance for Document Examination,” in the afternoon session.

The main research question for this project was, “How can we use features of a document to compare handwriting?” Data for the study came from the University of Maryland’s Computer Vision Lab (CVL) database (six written paragraphs). The FlashID software, developed by NIST, was also used to determine authorship by breaking words down into pieces called graphemes. Graphemes were grouped and labeled according to the number of nodes present (points where three or more lines connect) and where those lines connect. Several paragraphs from different writers were analyzed by comparing the grapheme counts in each writer’s paragraphs against those in the questioned document, whose true author was known. A random forest was used to rank the graphemes in order of importance, and a Bayesian hierarchical model, an approach for which Dr. Stern is well known, was then used to estimate the probability that the questioned document originated from a given writer.
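As a rough illustration of the random-forest step, the sketch below ranks hypothetical grapheme categories by how well they separate writers. The counts are simulated; FlashID’s real features and the team’s actual pipeline are not shown here.

```python
# Illustrative sketch, not the authors' code: rank grapheme categories by
# their usefulness for telling writers apart, via random-forest importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_writers, docs_per_writer, n_graphemes = 10, 6, 40

# Each writer gets their own average rate for every grapheme category...
rates = rng.uniform(1, 5, size=(n_writers, n_graphemes))
# ...and each paragraph is a vector of grapheme counts drawn at those rates.
X = rng.poisson(np.repeat(rates, docs_per_writer, axis=0))
y = np.repeat(np.arange(n_writers), docs_per_writer)  # writer labels

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Grapheme categories ordered from most to least informative.
ranking = np.argsort(forest.feature_importances_)[::-1]
print("Most informative grapheme categories:", ranking[:5])
```

In the project itself, a Bayesian hierarchical model would then sit on top of features like these, with writer-level distributions nested inside a population-level distribution, to express the probability that a questioned document came from a particular writer.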

Their analysis supported the notion that reliably identifying a writer takes more information than grapheme and node counts alone. They concluded that it is difficult to infer whether a simple questioned writing sample falls within the natural variation of its true author yet outside that of other writers. Future aims include developing a method that scores the similarity of two documents, both with unknown authors, as a likelihood ratio. The team plans to redirect their research toward the questions, “What is important to measure about handwriting?” and “What is different in similar structures?”
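The likelihood-ratio idea they are aiming for can be sketched briefly: score two documents by how probable their feature differences are under a “same writer” model versus a “different writers” model. Everything below (the Gaussian models, the variance defaults, the function name) is a hypothetical illustration, not the team’s method.

```python
# Hypothetical likelihood-ratio score for two documents of unknown authorship.
# The Gaussian models and default variances are assumptions for illustration.
import numpy as np
from scipy.stats import norm

def likelihood_ratio(doc_a, doc_b, sigma_within=1.0, sigma_between=3.0):
    """Similarity of two grapheme-count vectors, as a likelihood ratio."""
    diff = np.asarray(doc_a, dtype=float) - np.asarray(doc_b, dtype=float)
    # Same-writer hypothesis: differences reflect only within-writer variation.
    ll_same = norm.logpdf(diff, scale=sigma_within).sum()
    # Different-writers hypothesis: differences also include between-writer spread.
    ll_diff = norm.logpdf(diff, scale=np.hypot(sigma_within, sigma_between)).sum()
    return np.exp(ll_same - ll_diff)  # values above 1 favor "same writer"

# Two very similar documents should yield a ratio well above 1.
print(likelihood_ratio([3, 1, 4, 1], [3, 2, 4, 1]))
```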