Latent Print Verifications (and Conflict Resolution)


Webservant (Member)
Moderator
Username: Member

Post Number: 232
Registered: 03-1997
Posted on Monday, June 25, 2007 - 07:31 am:

Michele and Ignacio's answers below are excellent. I want to stress the need to take less experienced Latent Print Examiners' opinions very seriously, and not to discount them automatically when they conflict with the opinions of seasoned veterans.

It is possible for any Latent Print Examiner to make an error, even well-qualified CLPEs. That is the reason for standards and guidelines set by professional organizations such as the IAI, SWGFAST and ASCLD/LAB. The quality control (verify every identification) and quality assurance (certification, annual proficiency testing, case review, laboratory accreditation) standards and guidelines set by those organizations are invaluable to serve and protect society through accurate casework.

As we grow older and continue working at Latent Print Examination for decades, there is a tendency among some of us to believe we perpetually gain more ability to recognize and understand every nuance of friction ridge impression detail, as well as every similarity and difference between latent and record impressions. A sort of expertise-creep can foster the unfounded belief that additional years of experience let a Latent Print Examiner always see ridge detail better than the younger generation, especially detail in impressions lacking good clarity. To some extent that is true, especially during the first couple of years an intern studies to master this science. However, the assumption that experience provides universal superiority is erroneous and can be dangerous to quality control and quality assurance programs. Some younger and less experienced experts may leverage electronic research libraries (e.g., the JFI Electronic Archive) and other training resources to attain knowledge, skills and abilities outdistancing old men like me, without requiring decades of "hard knocks" experience. And some older Latent Print Examiners develop bad work habits detected only through quality assurance procedures such as case review and annual proficiency testing.

Having worked over three decades in government laboratories on three continents, I like to think that my observation and interpretation skills have improved with experience, but I am also ever mindful that I am human, and that in any field of human endeavor there will be oversights (errors). The opinion of Latent Print Examiners trained to competency, but having less experience than their seniors, should not be ignored during the quality control and quality assurance process.

There are two red flags common to most erroneous identifications in recent decades:
(1) AFIS hit, and
(2) Only one print identified to the suspect.
Both those conditions existed in what may be the most comprehensive (English language) study of an erroneous identification occurring in an organization with well-trained Latent Print Examiners. The documentation from that study is online at www.usdoj.gov/oig/special/s0601/PDF_list.htm
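
Purely as an illustration (this is my sketch, not anything from the OIG report or any agency's procedure), those two red flags amount to a simple triage rule that could be written down explicitly; the field names below are hypothetical:

```python
# Illustrative sketch only: flags cases matching the two red-flag
# conditions common to most recent erroneous identifications.
# The field names (afis_hit, prints_identified) are hypothetical.

def needs_heightened_review(case: dict) -> bool:
    """True when both red flags are present: the candidate came from
    an AFIS search, and only one print was identified to the suspect."""
    return bool(case.get("afis_hit")) and case.get("prints_identified") == 1

# Example: a single-print AFIS hit warrants extra scrutiny
# (e.g., documented analysis before comparison, blind verification).
print(needs_heightened_review({"afis_hit": True, "prints_identified": 1}))  # True
```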

Those documents from the US Department of Justice, Office of Inspector General (OIG) are worth downloading and reviewing periodically, especially when Latent Print Examiners forget the weight of the "one discrepancy" rule relative to a large number of "matching" minutiae.

On page 64, the full report states that the FBI found in excess of 15 points of comparison during its examination and considered the match to be a 100% identification. The OIG report faults many factors that contributed to the error, including circular reasoning (where the record print influenced interpretation of some unclear ridge endings and bifurcations) and failure to rigidly apply the one discrepancy rule.

On page 271, the report recommends six laboratory procedures (in addition to other procedures already implemented) that are important to ensure accuracy:
(1) developing criteria for the use of Level 3 details to support identifications,

(2) clarifying the "one discrepancy rule" to assure that it is applied in a manner consistent with the level of certainty claimed for latent fingerprint identifications,

(3) requiring documentation of features observed in the latent fingerprint before the comparison phase to help prevent circular reasoning,


(4) adopting alternate procedures for blind verifications,

(5) reviewing prior cases in which the identification of a criminal suspect was made on the basis of only one latent fingerprint searched through the FBI's Integrated Automated Fingerprint Identification System (IAFIS), and

(6) requiring more meaningful and independent documentation of the causes of errors as part of the Laboratory's corrective action procedures.

In my personal opinion (maybe worth only what you pay to read this), I recommend a couple of steps an agency should consider as part of conflict resolution (not to be confused with normal case review or blind verification) when there are questions about discrepancies versus distortion, and the potential that circular reasoning is influencing interpretation of ridge detail in areas of poor clarity. These two recommendations relate to recommendations (2) and (3) from the OIG report, listed above. The steps are:
(A) Have additional "trained to competency" Latent Print Examiners evaluate the latent print only (8" x 10" photos) and produce tracings (using a permanent marker on a sheet of clear plastic, such as a document protector) of what they consider to be reliable ridge detail in the latent print. Do this in a proctored environment, and use mirror-image reversed photos of the latent print to minimize the opportunity for influence from prior knowledge about the latent print. Because you are only requesting that they trace what they consider reliable ridge detail, they should not need a great deal of time, maybe 10 to 20 minutes. Comparing their (position-reversed) tracings with the record print may help refute or confirm the presence of circular reasoning (giving more weight to "similar" features than to equal or better quality dissimilar features due to influence from the record print). You might then consider taking the latent print image away from each examiner, flipping over their tracings, and asking them to evaluate a comparison of their tracing with the record print (to see whether their unbiased interpretation of reliable ridge detail supports or refutes an identification).

(B) Invite Latent Print Examiners who are claiming an identification to prepare a written explanation for each discrepancy, especially addressing visual indicators for specific distortion in each instance, and the clarity of each discrepancy relative to the clarity of all (not just some) "matching" level one, level two and/or level three features. Explanations for distortion may turn out to be unwarranted rationalization by well-meaning, very experienced senior Latent Print Examiners... influenced by circular reasoning. Conversely, explanations for distortion may be accurate, scientific explanations that will appropriately convince other Latent Print Examiners that there is no discrepancy.

What should you do when conflict resolution leaves you with uncertainty? What about a situation with 15-plus matching points charted by seasoned Latent Print Examiners (e.g., the OIG report mentioned above), but with other fully trained Latent Print Examiners refusing to accept distortion as an explanation for what they consider one or more discrepancies? I like to remember the American baseball words of my friend Paul Llewellyn, CLPE: "A tie goes to the runner." It is not an identification.

The above comments do not purport to reflect the position or opinion of the US Department of Defense, the US Army Criminal Investigation Command, the US Government, or any other organization with which Ed German was or is associated. They are only the personal opinion of Ed German.



Ignacio Acosta (Unregistered Guest)
Unregistered guest
Posted on Wednesday, June 20, 2007 - 12:43 am:

In the agency where I worked for the past 18 years (I'm now retired), we verified each other's idents. My last supervisor's rule was that when an ident was made by a CLPE, it had to be verified by another CLPE. If the verifier did not agree, it went to a third CLPE in the same agency (back when we had four CLPEs). If all three, the one making the ident and the other two, did not agree, it was considered a no hit.
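
Stated as a decision rule, purely for illustration (the function and labels below are a hypothetical sketch, not any agency's actual software), that process requires unanimity before an identification is reported:

```python
# Minimal sketch of the escalation rule described above: an ident made
# by one CLPE is verified by a second; on disagreement a third CLPE
# examines it, and anything short of unanimous agreement is a no hit.

def resolve_ident(conclusions):
    """conclusions: the original examiner's conclusion followed by each
    verifier's, e.g. ["ident", "ident"]. Values are "ident" or "no ident"."""
    if all(c == "ident" for c in conclusions):
        return "identification"            # unanimous agreement
    if len(conclusions) < 3:
        return "send to another CLPE"      # first verifier disagreed
    return "no hit"                        # three examiners not unanimous

print(resolve_ident(["ident", "ident"]))                  # identification
print(resolve_ident(["ident", "no ident"]))               # send to another CLPE
print(resolve_ident(["ident", "no ident", "no ident"]))   # no hit
```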

Michele Triplett (Michele_triplett)
Member
Username: Michele_triplett

Post Number: 13
Registered: 08-2006
Posted on Wednesday, June 13, 2007 - 11:31 am:

Eric,

It sounds like there may be more to this issue than can be addressed in an on-line forum, but it's an important topic, so I'll try to hit the main areas.

Verification isn't a scientific requirement, but that doesn't mean it's not important. Due to the serious nature of our conclusions, verification should always be done, and this position is supported by SWGFAST. BUT... that doesn't mean it's important for the conservative examiner to verify the print. I believe the conservative examiner should be praised for not giving in to the pressure his agency is putting on him. Standing up for your own conclusion is a sign of an independent conclusion. Many people feel that independent conclusions have to come from an outside source, but independence just means you're not swayed by others and not obligated to go along with your agency's conclusion simply because you work for them. This quality is highly valued in our field, but it's hard to test for.

Anyway, many offices are small, and it might not seem like they have many options for verification. This isn't always true. I'm sure there are other agencies around that would be glad to verify work every now and then, especially since it sounds like it's only a few times a year. You may even want to consider asking your state lab or a neighboring state lab. It's important that your agency set up a formal policy rather than a willy-nilly system for doing this.

I've seen supervisors who thought it looked bad to use an outside source for verification because it showed that the print couldn't be verified within the office. The integrity of the conclusions should outweigh how it looks to others. This can easily be explained in court if the print is being sent out for the right reasons. Which brings up the most important question: what is the real reason it's being sent out?

Is the reason it can't be verified in house really the conservatism of the other examiner? How do you know this is the reason? If all examiners have different tolerance levels, what is the tolerance level of your agency? This sounds like a strange question, but agencies need to have some sort of standard. Some agencies require "a verifier", meaning that if anyone will verify the print, that's good enough. That may not be a good policy, because it leads to verification shopping. Other agencies want sufficient justification behind a conclusion. Others want conclusions that are demonstrable. Science wants conclusions that will hold up to scrutiny and stand the test of time. Maybe your agency needs to set some clear standards. Although I can hear your supervisors sigh from here, several court cases are already pushing agencies to do this, so it would be beneficial to start working on it now.

Could it be a training issue? Is one examiner using all levels of detail and the other only level 2 detail? Is it possible that instead of the verifier being ultra-conservative, the original examiner may be pushing things a little? Perhaps the original examiner is 100% certain of the ID, but the weight of the ID is in his mind and not in the characteristics of the print. If the weight were in the characteristics, then perhaps he needs help articulating this to the verifier. Explaining an ID doesn't mean he's biasing you or pressuring you; eventually he may need to explain this ID in court, so it's important that he can support his ID now. Maybe the original examiner needs training on justifying and demonstrating his conclusions? Or perhaps it's just a difference in eyesight? Before any decision about sending the print out for verification can be made, the circumstances need to be thoroughly investigated. I also think it's important to document in each case why it was sent out for verification. In some cases it may be due to a backlog, but in other cases it could be because the verifier doesn't agree.

I mentioned the importance of having the examiner justify his conclusion (why it's an ID); well, it's also important for the verifier to justify his conclusion (why it's not sufficient). We don't need this in every case, but when two people come to different conclusions, it's important to know what each of them is basing their conclusion on. Both of these justifications should be kept in the case file. Whether you're accredited or not, it's also important to have a conflict resolution plan (ASCLD/LAB requires it). How the conflict was resolved (sent out to another agency for additional analysis, or decided not to ID) should also be documented and kept in the case file.
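
As a sketch only (the field names are invented for illustration; this is not an ASCLD/LAB or SWGFAST schema), the documentation described above amounts to a handful of entries kept with the case:

```python
# Hypothetical sketch of a case-file conflict record capturing the
# documentation recommended above. Field names are invented for
# illustration only.
from dataclasses import dataclass

@dataclass
class ConflictRecord:
    case_number: str
    original_conclusion: str      # e.g., "identification"
    original_justification: str   # why it's an ID
    verifier_conclusion: str      # e.g., "insufficient"
    verifier_justification: str   # why it's not sufficient
    reason_sent_out: str          # e.g., "backlog" or "verifier disagreed"
    resolution: str               # e.g., "outside agency analysis", "no ID reported"

record = ConflictRecord(
    case_number="07-1234",
    original_conclusion="identification",
    original_justification="12 minutiae in agreement, no unexplained discrepancies",
    verifier_conclusion="insufficient",
    verifier_justification="clarity too poor to rely on two of the charted points",
    reason_sent_out="verifier disagreed",
    resolution="sent to state lab; reported as inconclusive",
)
```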

Am I expecting too much considering the size and resources of most agencies? You bet!!! As I stated at the beginning, the ramifications of our conclusions are huge and we all need to take our responsibility very seriously.

Good Luck,
Michele Triplett
Michele.triplett@metrokc.gov

Eric Soderlund (Rustie)
Member
Username: Rustie

Post Number: 1
Registered: 06-2007
Posted on Tuesday, June 12, 2007 - 08:33 pm:

Scenario: Two analysts work together in an identification unit. Both are experienced analysts with over twenty-five years of comparison experience. One analyst is more conservative in terms of performing identifications. During the prior year, he verified perhaps hundreds of latent print identifications. However, during that same year, three latent prints had to be sent to another analyst for verification. The analyst who didn't verify these prints didn't believe the wrong person had been identified; rather, due to issues such as clarity and minimal points, he was reluctant to identify the impressions. The manager believes that these three prints not being verified in house is a serious matter. He believes that any latent print identified by an examiner should be verifiable by any competent latent print examiner. Question: Should the analyst who didn't identify the three latent prints face any kind of discipline? Your input is appreciated. Thanks.

(Message edited by Rustie on June 12, 2007)
