The first poster I visited was presented by Dr. William Thompson, a faculty member at the University of California, Irvine; his co-author was Dr. Nicolas Scurich. The poster was titled “Jurors’ reactions to testimony about contextual bias.1” He and his team aimed to see how jurors respond when forensic experts are questioned about contextual bias and subjectivity, and how best to reduce both. Contextual bias occurs when well-intentioned experts allow outside influences (such as demographic information about the suspect or victim) to affect their judgement. The data were the jurors’ responses after reading a series of testimonies in a mock case. The variables were whether the expert (a forensic odontologist) was questioned about subjectivity in their field, and how the expert responded when asked whether they had been exposed to potentially biasing information. The measured outcomes were how the jurors rated the expert’s scientific credibility and whether they voted to convict the mock defendant.
The jurors were given a written mock case, questioned about their impressions afterward, and asked whether they would vote to convict the defendant. The participants were all real jurors who had recently served on a jury in an actual court case. Their responses were analyzed using analysis of variance and linear regression in SPSS, a statistical analysis software package. The results showed that jurors rated the expert’s scientific credibility lower when the expert was questioned about subjectivity than when the expert was not questioned. They rated the expert’s credibility higher when the expert claimed not to have been exposed to biasing information and was not questioned about subjectivity in their field. More jurors voted to convict the defendant when the expert was not questioned about subjectivity, in every condition except the one in which the expert said they had been blinded to biasing information. When the expert claimed to have been blinded, conviction votes were equal regardless of whether the expert had been cross-examined about subjectivity. The team’s future plans are to use a live trial simulation instead of a written case to see whether jurors respond differently. Forensic experts can use this information to improve how they communicate when testifying and to guide efforts to reduce unintentional bias and subjectivity in their field.
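To make the analysis concrete, here is a minimal sketch in R of the general approach described (the poster’s analysis was actually done in SPSS). The data frame, factor levels, and numbers below are simulated stand-ins rather than the study’s data, and the logistic regression for the binary conviction vote is my substitution for the linear regression named on the poster.

```r
# Hypothetical data: two factors from the study design (whether the expert was
# cross-examined about subjectivity, and what the expert said about exposure to
# biasing information) and two outcomes (credibility rating, conviction vote).
set.seed(42)
jurors <- data.frame(
  cross_examined = factor(rep(c("yes", "no"), each = 100)),
  bias_exposure  = factor(sample(c("exposed", "blinded", "not_asked"), 200, replace = TRUE)),
  credibility    = rnorm(200, mean = 7, sd = 1.5),   # simulated 1-10 credibility rating
  convict        = rbinom(200, 1, 0.5)               # simulated guilty/not-guilty vote
)

# Analysis of variance on the credibility ratings
summary(aov(credibility ~ cross_examined * bias_exposure, data = jurors))

# Logistic regression on the binary conviction vote (a stand-in for the
# regression reported on the poster)
summary(glm(convict ~ cross_examined * bias_exposure, family = binomial, data = jurors))
```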
I asked him about the education level of the mock jury and was surprised to learn that most of the participants were highly educated, and a few had advanced degrees in a scientific field. When I asked whether this made them more likely to question scientific evidence, we began discussing whether another experiment with a different, less educated jury would produce different results.
The second poster I visited was titled “Statistical Analysis of Letter Importance for Document Examiners2” and was presented by Amy Crawford, a graduate student at Iowa State University; her co-author was Dr. Alicia Carriquiry. The main research question was whether writing characteristics can be identified that discriminate between the authors of two questioned documents. The data came from handwriting databases and were converted into pixel-wide skeletons so that the program would not factor in the thickness of the lines. This matters when analyzing documents written by the same author with different writing instruments, for example one letter written in pen and another in pencil.
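The poster did not say which thinning algorithm produced the skeletons, so the sketch below uses one standard choice (Zhang-Suen thinning) purely to illustrate what reducing handwriting to a pixel-wide skeleton involves; it is an assumption for illustration, not the authors’ implementation.

```r
# Illustrative Zhang-Suen thinning (an assumed, not confirmed, method): repeatedly
# peel boundary pixels from a binary image until every stroke is one pixel wide,
# so line thickness no longer affects later comparisons.
zhang_suen_thin <- function(img) {
  # img: binary matrix, 1 = ink, 0 = background
  pad <- function(m) rbind(0, cbind(0, m, 0), 0)   # add a background border
  img <- pad(img)
  repeat {
    changed <- FALSE
    for (step in 1:2) {
      to_delete <- matrix(FALSE, nrow(img), ncol(img))
      for (i in 2:(nrow(img) - 1)) {
        for (j in 2:(ncol(img) - 1)) {
          if (img[i, j] != 1) next
          # 8 neighbours in clockwise order starting from the pixel above
          p <- c(img[i - 1, j], img[i - 1, j + 1], img[i, j + 1], img[i + 1, j + 1],
                 img[i + 1, j], img[i + 1, j - 1], img[i, j - 1], img[i - 1, j - 1])
          B <- sum(p)                               # number of ink neighbours
          A <- sum(p == 0 & c(p[-1], p[1]) == 1)    # 0 -> 1 transitions around the ring
          cond <- if (step == 1) {
            (p[1] * p[3] * p[5] == 0) && (p[3] * p[5] * p[7] == 0)
          } else {
            (p[1] * p[3] * p[7] == 0) && (p[1] * p[5] * p[7] == 0)
          }
          if (B >= 2 && B <= 6 && A == 1 && cond) to_delete[i, j] <- TRUE
        }
      }
      if (any(to_delete)) { img[to_delete] <- 0; changed <- TRUE }
    }
    if (!changed) break
  }
  img[2:(nrow(img) - 1), 2:(ncol(img) - 1)]         # strip the border
}
```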
They had six paragraphs written by each of nine different authors. Five paragraphs were used for modeling and one was held out as a questioned document. The words in the documents were separated into smaller graphemes that the system could read easily. Graphemes are the smallest meaningful units in writing, often a single letter or two connected letters. Dots called “nodes” were placed at points on each grapheme where the line turned sharply or intersected other strokes. The system examined the nodes to analyze the general shape of each letter. Graphemes were assigned to groups based on the number and shape formed by their nodes, and the system looked at how the nodes connect to one another to determine a match. The data were analyzed using the randomForest package in R to determine the most significant graphemes and a Bayesian hierarchical model to generate the authorship predictions.
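The random-forest step can be sketched briefly. The snippet below only illustrates ranking features by variable importance with the randomForest package; the column names and simulated values are hypothetical stand-ins for the node-based grapheme measurements on the poster, and the Bayesian hierarchical authorship model is not shown.

```r
# Hedged sketch: rank hypothetical grapheme features by how well they separate writers.
library(randomForest)

set.seed(1)
# One row per extracted grapheme; "writer" is the known author label.
graphemes <- data.frame(
  writer     = factor(sample(paste0("W", 1:9), 500, replace = TRUE)),
  n_nodes    = rpois(500, 4),        # hypothetical: node count per grapheme
  node_angle = runif(500, 0, 180),   # hypothetical: mean turning angle at the nodes
  aspect     = runif(500, 0.5, 2)    # hypothetical: bounding-box aspect ratio
)

fit <- randomForest(writer ~ ., data = graphemes, importance = TRUE)
importance(fit)  # variable-importance scores; the study used analogous scores to rank graphemes
```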
The variable in the experiment was the number of comparisons the system made in each questioned document, and the observational units were the questioned documents. The results indicate that having 15 to 30 points of comparison between a known standard and a questioned document returns accurate predictions, provided that the graphemes the system looks for appear in both documents. They also found that the graphemes H, ELL, O, U, and L were the most important for predictive analysis. Their future plans include analyzing the curvature of the loops and lines in letters to further discriminate between authors. Researchers will use these data to build a statistical foundation for handwriting analysis and to reduce human error and bias.