1pm Session: Statistical Foundations

While I have some background in statistics, I struggled to follow everything these experts discussed; still, I was able to draw a few small conclusions. The discussion centered on two main topics: access to data, and finding the right balance between forensic science and statistics (should the intersection be its own discipline?).

Data Access: This topic is as simple as it sounds. Many statisticians have trouble drawing conclusions from the data they've collected because there is either not enough of it or it is too circumstantial. With the former issue, there are legal complications that limit access to the data needed to improve specific areas of forensic science. For example, mobile phones are extremely limited in what data you can access because of privacy issues, and the data that is accessible gets dumped on forensic scientists as piles upon piles of information that is overwhelming and hard to separate into what is necessary and what is not. The latter issue is with data that has been collected under only one narrow set of conditions. For example, you could have a batch of shoeprint scans come in, but all of them from brand-new tennis shoes that have been worn only a handful of times, so any conclusions drawn from that data would apply only to fairly new shoes. Essentially, there isn't enough accessible data for statisticians and forensic scientists to draw new conclusions about specific types of evidence.

Balance Between Forensics and Statistics: This topic presented a challenge to everyone in the discussion because no one had a well-defined solution. Most people felt that statisticians too often have to meet forensic scientists on their end, but not always vice versa. I can see how this happens: statisticians need to know what forensic scientists consider significant evidence and how they analyze a variety of patterns. Once they have that, they can do their magic and build models for these different forms of evidence. Forensic scientists, on the other hand, often only want the end result of what the statisticians come up with rather than an understanding of the process behind it. It is frustrating that forensic scientists don't always meet us on our end while expecting us to come to them. It is also frustrating that statisticians don't always know what forensic scientists actually consider significant evidence. For example, if statisticians spend time developing a model for fingerprint analysis and then find out forensic scientists don't even use it in cases, they have wasted all that time and money on something essentially useless (a poor example, but just for argument's sake). The other challenge statisticians face is taking a complex method they have developed over years and distilling it down to its most basic form for others to understand, and not just for forensic scientists but for everyone in the courtroom.

3:15pm Session: Firearms/Handwriting

I really enjoyed this discussion because it was completely foreign to me. I learned a lot from the little bit of information that I collected.

Handwriting: When you evaluate a suspect's signature or handwriting, the most logical thing to do is compare it to another piece of writing. But just by looking at the handwriting, you can't reach a definite conclusion about whether or not the two pieces match. Someone could have forged the signature, the suspect could have written with his other hand to disguise his usual handwriting, and so on; the list of possibilities goes on. So Dr. Stern is trying to develop a score-based model for whether or not two signatures could be a match. The process is a little fuzzy to me, but essentially you take whatever piece of handwriting you have and collapse it down on itself. This results in a series of points; you take the distance between each pair of points and assign that distance to a vector (I think?). When you look at the encoding of that vector, it looks like a series of 0s and 1s. So when you compare two pieces of handwriting, if they have matching vectors, they could potentially be a match.
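Since the details of the model were fuzzy to me, the following is only a rough sketch of the general idea as I understood it, not Dr. Stern's actual method: reduce each writing sample to a set of points, turn the distances between them into a binary vector of 0s and 1s, and score how well two vectors agree. The function names, the threshold, and the toy coordinates are all my own hypothetical stand-ins.

import numpy as np

def distances_to_binary_vector(points, threshold):
    """Collapse a writing sample's points into a binary feature vector.

    Distances between consecutive points are thresholded into 0s and 1s,
    a rough stand-in for the vector encoding described in the session.
    """
    points = np.asarray(points, dtype=float)
    dists = np.linalg.norm(np.diff(points, axis=0), axis=1)  # gaps between consecutive points
    return (dists > threshold).astype(int)                   # binarize each gap

def match_score(vec_a, vec_b):
    """Fraction of positions where two binary vectors agree."""
    n = min(len(vec_a), len(vec_b))
    return float(np.mean(vec_a[:n] == vec_b[:n])) if n else 0.0

# Toy example: point sets "extracted" from a questioned and a known signature
questioned = [(0, 0), (1, 2), (3, 2), (4, 5)]
known = [(0, 0), (1, 2), (3, 3), (4, 5)]

score = match_score(
    distances_to_binary_vector(questioned, threshold=2.0),
    distances_to_binary_vector(known, threshold=2.0),
)
print(f"similarity score: {score:.2f}")

With a score like this, writing from the same person should agree at most positions while writing from different people should agree less often; where exactly to draw the line between "match" and "non-match" is the open statistical question.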

Ballistics: This was a topic many of the researchers were passionate about. There was a lot of discussion on machine error, how to evaluate a bullet and a casing simultaneously, associations between casings and bullets, and so on. When a machine scans a bullet, the resulting image is assumed to be a perfect representation; however, when the machine is run several times in a row on the same bullet, there is a slight difference between each image. So when doing bullet comparisons, there is a source of error from the machine that needs to be accounted for. Another question that came up was whether you could scan a bullet and a casing together simultaneously. The big issue is that not every firearm ejects a casing when fired, and not every casing collected at a crime scene is necessarily associated with a bullet found there. So analyzing them simultaneously presents a challenge to statisticians: how do we distinguish these nuances when we're not experts in this field? Again this comes back to forensics meeting statistics halfway (which was another topic in this group).
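To make the machine-error point concrete to myself, here is a toy sketch that is entirely my own and not anything presented in the session: treat a bullet's scanned surface as a 1D signal, simulate several scans of the same bullet with a little instrument noise, and look at how much the pairwise similarity scores spread even though the source is identical. The noise level, the profile, and the use of correlation as the score are all hypothetical choices.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a bullet's surface profile as a 1D signal.
# Repeated scans of the same bullet differ only by machine noise.
true_profile = np.sin(np.linspace(0, 20, 500)) + 0.3 * rng.standard_normal(500)

def scan(profile, machine_noise=0.05):
    """Simulate one scan: the true profile plus instrument noise."""
    return profile + machine_noise * rng.standard_normal(profile.shape)

def similarity(a, b):
    """Pearson correlation between two scan profiles."""
    return float(np.corrcoef(a, b)[0, 1])

# Scan the same bullet several times and look at the spread of scores.
scans = [scan(true_profile) for _ in range(10)]
scores = [similarity(scans[i], scans[j])
          for i in range(len(scans)) for j in range(i + 1, len(scans))]

print(f"same-bullet score range: {min(scores):.3f} to {max(scores):.3f}")

The point is simply that even "same bullet" comparisons don't score a perfect 1.0, so any match threshold has to sit above whatever spread the instrument itself introduces.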

Poster Session: A Shoeprint Comparison Method

I met with Dr. Sarena Wiesner of the Israeli police. She and her undergraduate student, in collaboration with the University of Jerusalem, were working on a project to develop a statistical method for comparing shoeprints. A question I asked was, "What if the scanned print of the shoe is so worn that it is barely distinguishable, to the point where you may not even be able to tell what brand of shoe it is?" Her answer was essentially that they are still in the process of collecting all different types of shoes and evaluating various prints. Their conclusion for now was that more work needs to be done before they can draw any conclusions from what they currently have.