“Quality Metrics for Pattern Evidence”

“Quality Metrics for Pattern Evidence” is a project based on fingerprint analysis. The main question the project seeks to answer is: “Given a latent print, can we use the fingerprint’s quality to determine the probability LPEs (Latent Print Examiners) will find the right match?” The project is authored by Karen Pan and Karen Kafadar. Pan is a graduate student at the University of Virginia, and Dr. Kafadar is her Principal Investigator.

The data used throughout the research came from several sources: NIST SD27a mated pairs, a database created by Professor Keith Inman, 300 mated pairs from the Defense Forensic Science Center, and thousands of mated pairs from Ron Smith and Associates, Inc. Four different quality metrics were applied: Peskin and Kafadar, DFIQI, LQM, and SNoQE. The Peskin and Kafadar metric examines the gradient of the contrast intensity around a feature in a print; the DFIQI (Defense Fingerprint Image Quality Index) is a “fingerprint statistical modeling tool”; and the LQM metric is also known as LQAS (Latent Quality Assessment Software).
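To make the idea of a gradient-based quality score more concrete, here is a minimal sketch in Python. This is not the authors’ implementation of the Peskin and Kafadar metric; it only illustrates the general notion of measuring how sharply pixel intensities change around a feature, assuming a grayscale print stored as a NumPy array and a hypothetical feature location (row, col).

```python
import numpy as np

def local_gradient_quality(image, row, col, window=8):
    """Illustrative local quality score: mean gradient magnitude of pixel
    intensities in a small window around a feature.
    (A sketch only; not the actual Peskin-Kafadar metric.)"""
    # Clip the window to the image boundaries
    r0, r1 = max(row - window, 0), min(row + window, image.shape[0])
    c0, c1 = max(col - window, 0), min(col + window, image.shape[1])
    patch = image[r0:r1, c0:c1].astype(float)

    # Intensity gradients along the vertical and horizontal directions
    gy, gx = np.gradient(patch)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)

    # Higher average contrast gradient suggests a sharper, higher-quality region
    return magnitude.mean()

# Example: a random "print" patch with a hypothetical feature at (50, 50)
demo = np.random.rand(100, 100)
print(local_gradient_quality(demo, 50, 50))
```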

The research presented the global quality scores of three NIST SD27a fingerprints, labeled good, bad, and ugly. LQM and SNoQE scores were also computed for those prints, along with the Peskin and Kafadar score. The next set of data came from the CTS latent proficiency test images, which were measured using the LQM, VID, VCMP, and SNoQE quality metrics.

Based on Pan and Dr. Kafadar’s conclusions, their results are useful for assessing the “level of difficulty” of proficiency tests and for experiments comparing different approaches. The future direction for the “Quality Metrics for Pattern Evidence” project is to combine all of the metrics into one complete system, so that there is a single central quality metric instead of four different ones.
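One simple way such a combination could work is a weighted average of the individual metric scores. The sketch below is only an illustration of that general idea, not the approach Pan and Kafadar plan to take; the metric names, weights, and scores are placeholders.

```python
import numpy as np

def combined_quality(scores, weights=None):
    """Sketch of a composite quality score: a weighted average of
    per-metric scores (illustrative only)."""
    names = list(scores)
    values = np.array([scores[n] for n in names], dtype=float)
    if weights is None:
        weights = np.ones(len(names))
    # Assume each metric has already been rescaled to a common [0, 1] range
    return float(np.average(values, weights=weights))

# Hypothetical per-print scores from the four metrics, rescaled to [0, 1]
print(combined_quality({"PeskinKafadar": 0.72, "DFIQI": 0.64,
                        "LQM": 0.58, "SNoQE": 0.61}))
```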

References

Triplett, Michele. “Michele Triplett’s Fingerprint Terms.” Michele Triplett’s Fingerprint Dictionary, 2002, www.fprints.nwlean.net/d.htm.

Pan, Karen, and Karen Kafadar. “Quality Metrics for Pattern Evidence.” Poster presentation at CSAFE All Hands Meeting, 2018-06-13.

“Latent Print Proficiency Testing”

“Latent Print Proficiency Testing: How Do Analysts Perceive Test Items?” is a project with the purpose of determining how analysts perceive test items within a survey. The authors are Dr. Brett O. Gardner, a postdoctoral fellow, Dr. Daniel C. Murrie, Dr. Sharon Kelley, and Kellyn N. Blaisdell from the University of Virginia.

Dr. Gardner shared that their data was collected from “321 fingerprint analysts who completed supplementary survey questions on the CTS latent fingerprint proficiency test in October of 2017.” A proficiency test is “a brief series of questions that can determine the performance of specific activities and can also determine how well something is going.” The Collaborative Testing Services latent print proficiency test required participants to compare 11 latent prints and then answer supplementary survey questions about those comparisons.

For their data, they measured the average perceived difficulty of test items and the average perceived similarity to casework. The statistical methods used were self-reporting, averaging, and survey percentages.
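A minimal sketch of that kind of summary, using hypothetical self-reported difficulty ratings rather than the study’s actual responses:

```python
# Illustrative summary of self-reported survey ratings (hypothetical data,
# not the study's actual responses).
ratings = {  # perceived difficulty on a 1-5 scale, per test item
    "item_1": [2, 3, 2, 4, 3],
    "item_2": [5, 4, 5, 4, 4],
}

for item, values in ratings.items():
    avg = sum(values) / len(values)
    pct_hard = 100 * sum(v >= 4 for v in values) / len(values)
    print(f"{item}: average difficulty = {avg:.2f}, "
          f"rated hard by {pct_hard:.0f}% of analysts")
```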

The results from the project were that “almost all participants endorsed maximum confidence regarding the item they perceived as least challenging,” that “participants also endorsed very high confidence regarding the items they perceived as most challenging,” and that “accuracy across items was high. Out of the 290 participants who submitted a latent print examination test and survey responses, only 11 reported an erroneous response on any test item.”
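To put those quoted figures in perspective, the share of participants reporting any erroneous response works out as follows. This is a simple back-of-the-envelope calculation based on the numbers above, not a figure reported on the poster.

```python
# Roughly what "accuracy was high" amounts to for the figures quoted above
participants = 290      # analysts who submitted test and survey responses
with_any_error = 11     # analysts reporting an erroneous response on any item
error_rate = with_any_error / participants
print(f"{error_rate:.1%} of participants made any error")  # about 3.8%
```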

According to their poster, they also concluded that participants were “highly confident in their decisions, and overall accuracy was high” and that the “overall results of the present study highlight the need for more rigorous, or better operationalized, means of training and assessing proficiency.”

A future direction for the project might examine whether participants’ “subjective reasons for prints being challenging align with the actual quality of images based on image quality metrics.” The team also plans to gather more data so that their findings can support a collaboration with the authors of the “Quality Metrics for Pattern Evidence” project. A collaboration between the two fingerprint projects could be the next step, but we will have to wait and see.

References

Collaborative Testing Services, Inc. (2016). Latent print examination no. 16-515/516 summary report. Retrieved from https://www.ctsforensics.com/assets/news/3616_Web.pdf

Gardner, Brett O., et al. “Latent Print Proficiency Testing: How Do Analysts Perceive Test Items?” Poster presentation at CSAFE All Hands Meeting, 2018-06-13.

Photo with Karen Pan: