1pm Session: Fingerprints
The first session started off with a discussion of how fingerprints differ from other disciplines and how they are regarded as sure things. One person in the discussion described them as special. Another stated that they are the number one form of identification because they are more affordable. Fingerprints are considered reliable based on assumption and on what has been seen in practice. However, the portions of a fingerprint recovered at a crime scene differ from what is on file or collected in real-life applications. Prints should be evaluated by both quality and quantity, whether partial or full. The last point brought up was that fingerprints are routinely found at crime scenes, unlike retina or lip prints.
The topic then shifted to a QM system, along with an older version of it, used to compare latent prints to the ones on file. One complaint was that the prints looked clear when viewed by themselves, but when put together the images became distorted and unclear. This led into what factors are used to determine the level of a print. One person went by the area and direction of the hand, the features in the print, and the clarity of the print. Another person leveled them based on the person examining the print and the difficulty of the print. But there are no standards for fingerprint comparison, and it is a complex process. The conversation then shifted to what I found most interesting: to do new research, one should look at past research and compare, since building on past work can make the current research better.
3:15pm Session: Digital Forensics
In the second session, the group talked about a past conference the project leaders had attended and then started to go through one of the projects, but ran out of time before getting to the others. Project StegoDB was the most fascinating to me, and luckily it was the one explained in more detail. In this project the leader was studying steganography, which is hiding a message in an image: sensitive data (the payload) is hidden in an innocent-looking file (the cover). The file is then sent to someone, and the payload is retrieved by using a password to decode the cover. The message is embedded by changing grayscale values by one, and a single image can hold roughly 30,000 pieces of the code. Changing bits of the image changes its statistical profile. This database helps law enforcement justify the admissibility of stego detection tools, and some interesting findings from preliminary studies should stimulate future research in image forensics and stego detection.
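The "change the grayscale by one" idea described above is essentially least-significant-bit (LSB) embedding. Below is a minimal sketch of that general technique, assuming a NumPy grayscale image and a plain list of bits; it is an illustration only, not the actual StegoDB or project implementation, and real tools would also encrypt the payload with the password mentioned above.

```python
# Minimal LSB steganography sketch (illustrative only, not the StegoDB method).
import numpy as np

def embed_lsb(cover: np.ndarray, payload_bits: list[int]) -> np.ndarray:
    """Hide a bit sequence by overwriting the LSB of successive pixels.

    Each altered pixel's grayscale value changes by at most one, which is
    imperceptible to the eye but shifts the image's statistical profile.
    """
    stego = cover.flatten().copy()
    if len(payload_bits) > stego.size:
        raise ValueError("payload too large for this cover image")
    for i, bit in enumerate(payload_bits):
        stego[i] = (stego[i] & 0xFE) | bit  # clear the LSB, then set it to the payload bit
    return stego.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> list[int]:
    """Recover the hidden bits by reading back the LSBs of the same pixels."""
    return [int(p) & 1 for p in stego.flatten()[:n_bits]]

# Example: hide the eight bits of the byte 0b01101000 in a random 8-bit cover.
cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
bits = [(0b01101000 >> k) & 1 for k in range(7, -1, -1)]
stego = embed_lsb(cover, bits)
assert extract_lsb(stego, 8) == bits
```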
A few brief points were made on the other projects. UCI Change Detection is having difficulty getting real-world data sets and is working on expanding its utility. Mobile App Forensics Analysis is working on translating its findings into practical applications and methods. Digital forensics is slowly becoming more useful, but the field is still trying to find its link to statistics to make the data more accurate.
Poster Session
For the poster session, I interviewed Neil Spencer. His poster, A Probabilistic Model for Shoeprints Accidentals, presented work done at Carnegie Mellon University. The poster was about developing a model for the locations of accidentals on the sole of a shoe. One result so far is that the model performs as well as the one currently in use, and with further testing he hopes to surpass the current model. The question I asked was what exactly the noise terms are; the answer was that they represent unnecessary information that was captured and then subtracted out.
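To give a rough sense of what "a probabilistic model for the locations of accidentals" can mean, the sketch below treats each accidental as an (x, y) point on the sole and fits a simple 2D kernel density estimate. This is only a hedged illustration under my own assumptions, not Spencer's actual model, and the coordinates are made up for demonstration.

```python
# Simplified illustration: estimate where accidentals tend to occur on a sole
# by fitting a 2D kernel density estimate to observed (x, y) locations.
# This is NOT the poster's model; the data below are invented.
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical accidental locations from several prints of one shoe,
# in normalized sole coordinates (0 to 1 on each axis).
accidentals = np.array([
    [0.21, 0.80], [0.24, 0.77], [0.60, 0.35],
    [0.58, 0.33], [0.62, 0.40], [0.15, 0.10],
]).T  # gaussian_kde expects shape (n_dims, n_points)

density = gaussian_kde(accidentals)

# Relative likelihood of an accidental appearing at two candidate spots:
print(density([[0.60], [0.35]]))  # near an observed cluster -> higher density
print(density([[0.90], [0.90]]))  # empty region of the sole -> lower density
```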