Forensic science is not as dependable as you might think
(by Spencer S. Hsu, The Washington Post, April 22, 2012)
In Hollywood, the moment the good guys trace a hair, a bullet fragment or a fingerprint, it’s game over. The bad guy is locked up.
But the glamorized portrait is not so simple in real life.
Far from infallible, expert comparisons of hair, handwriting, marks made by firearms on bullets, and patterns such as bite marks and shoe and tire prints are in some ways unscientific and subject to human bias, a National Academy of Sciences panel chartered by Congress found. Other techniques, such as bullet-lead analysis and arson investigation, survived for decades despite poorly regulated practices and a lack of scientific method.
Even fingerprint identification is partly a subjective exercise that lacks research into the role of unconscious bias or even its error rate, the panel’s 328-page report said.
“The forensic science system, encompassing both research and practice, has serious problems that can only be addressed by a national commitment to overhaul the current structure,” the panel concluded in 2009.
Now, Congress and the Obama administration are trying to regulate forensic science to help establish standards. Senate Judiciary Committee Chairman Patrick J. Leahy, D-Vt., and Commerce, Science and Transportation Committee Chairman John D. Rockefeller IV, D-W.Va., are weighing legislation that could subject techniques to greater scientific scrutiny and help establish their ranges of accuracy.
A Leahy bill would create a new office of forensic science in the Justice Department. Rockefeller is preparing legislation to expand the role of the National Science Foundation and the National Institute of Standards and Technology in setting scientific standards and research goals.
The Obama administration is also looking to “strengthen the linkage between cutting-edge science ... and the forensic tests used by law enforcement,” said Rick Weiss, spokesman for the White House Office of Science and Technology Policy.
Police and law enforcement agencies have rebuffed recommendations to remove crime labs from their control.
Since 2002, failures have been reported at about 30 federal, state and local crime labs serving the FBI, the Army and eight of the nation’s 20 largest cities.
Advances in DNA testing are exposing errors at unexpected rates. In November, researchers with the Urban Institute reported that new DNA testing appeared to clear convicted defendants in 16 percent of Virginia criminal convictions between 1973 and 1988 in which evidence was available for retesting.
A 2009 study of postconviction DNA exonerations — now up to 289 nationwide — found invalid testimony in more than half the cases.
“There are just too many related problems for this to be dealt with ad hoc,” said Brandon Garrett, a professor at the University of Virginia School of Law.
More DNA testing alone is not the answer, experts say. Historically, biological evidence has been collected in fewer than 20 percent of criminal cases. Other questioned forensic techniques are used far more often, with mistakes harming defendants and crime victims whose true assailants remain at large.
The National Academy of Sciences report cited the lack of effective standards for examiners, laboratories and court testimony. It also criticized Justice Department agencies for a dearth of research into problems and for being “too wedded” to the status quo to be trusted to lead reforms.
“This is our generation’s sole opportunity” to get arguments out of the adversarial system and resolved through science, said Thomas Bohan, who was president of the American Academy of Forensic Sciences in 2010. “It’s a shame they couldn’t have done a good job 10 or 20 years ago.”
Arson investigation is an example of how research has dramatically improved practices.
Since 1990, the number of U.S. structure fires attributed to arson has dropped by half. One reason is that scientific test burnings have disproved the notion that some burn marks could be caused only by liquid accelerants.
Meanwhile, scientific doubts about fingerprint examination have festered for decades. While fingerprint analysis is one of the most valuable and frequently applied investigative tools, its accuracy has not been scientifically defined.
FBI examiners claimed until recently that they could match fingerprints to the exclusion of any other person in the world with 100 percent certainty, using a method with an error rate essentially of zero. The academy report found that assertion was “not scientifically plausible” and had chilled research into error rates.
In 1999, a Justice Department official, Richard Rau, told a federal court that the department delayed such a study because of the legal ramifications. As recently as last year, Pennsylvania State University researcher Cedric Neumann was denied a department grant to determine potential fingerprint error rates using closed cases.
Neumann declined to comment for this article.
A person familiar with the episode blamed a polarized climate in the adversarial legal system, saying, “Few agencies in the forensic-science community want to be the first ones associated with an error rate.” The person spoke on the condition of anonymity to discuss sensitive federal research funding decisions.
Meanwhile, errors occur. In 2004, DNA for the first time exonerated a person convicted with a fingerprint match and, separately, the FBI made its first publicly acknowledged fingerprint misidentification. Brandon Mayfield, a Portland lawyer, was mistakenly arrested in connection with the terrorist train bombings in Madrid that killed 191 people. The FBI apologized.
Since then, the Justice Department has begun research to try to quantify how complete a fingerprint must be to properly declare a match; how different conditions may affect the reliability of examinations; whether computers can do such work; and how to present forensic testimony about probabilities to judges and juries. The FBI has also required “blind verification” of results by agents unfamiliar with initial examinations.
The bureau said that skilled analysts are extraordinarily accurate, at least when they know they are being tested. An FBI study with Noblis Corp. last year found that when 169 examiners compared thousands of fingerprints and decided there was enough information to declare a match or not, they were correct 99.8 percent of the time.
Still, the Mayfield case highlighted the need for research into real-world conditions. A 2006 study by a London-based scientist, Itiel Dror, asked experts to analyze fingerprints that, unbeknownst to them, they had analyzed earlier in their careers. This time, however, examiners were given biasing statements, such as that a suspect had confessed or that a suspect was locked up at the time of the offense. In 16.6 percent of cases, examiners reversed earlier judgments.
Crime lab directors and prosecutors welcome calls for more money to fund research, train examiners and improve facilities. But with budgets tight at all levels, Washington has few other tools to prompt 350 state and local labs across the country to improve.
The fact that a technique has not been scientifically proven does not mean it does not work, defenders say, and mistakes can be handled traditionally through case-by-case appeals.
“In the real life of the criminal justice system, we need more resources for those who are on the front lines,” said Scott Burns, executive director of the National District Attorneys Association. Noting that prosecutors handle 20 million nontraffic cases a year, Burns said, “The sky isn’t falling, and we usually get it right.”
Pete Marone, director of the Virginia Department of Forensic Science and chairman of the Consortium of Forensic Science Organizations, urged Congress not to “reinvent the wheel” by abandoning all existing accreditation standards or groups such as the one he represents.
“Don’t judge forensic science today based on errors from 30 years ago,” Marone said. “What we need is someone setting a research agenda and direction. ... We need leadership.”
The ‘CSI’ effect
Popularized in fiction by Sherlock Holmes, hair comparison became an established forensic science by the 1950s. Before modern-day DNA testing, hair analysis could, at its best, accurately narrow the pool of criminal suspects to a class or group or definitively rule out a person as a possible source.
But in practice, even before the “‘CSI’ effect” led jurors to expect scientific evidence at every trial, a claim of a hair match packed a powerful, dramatic punch in court. The testimony, usually by a respected scientist working at a respected federal agency, allowed prosecutors to boil down ambiguous cases for jurors to a single, incriminating piece of human evidence left at the scene.
Forensic experts typically assessed the varying characteristics of a hair to determine whether the defendant might be a source. Some factors were visible to the naked eye, such as the length of the hair, its color and whether it was straight, kinky or curly. Others were visible under a microscope, such as the size, type and distribution of pigmentation, the alignment of scales or the thickness of layers in a given hair, or its diameter at various points.
Other judgments could be made. Was the hair animal or human? From the scalp, limbs or pubic area? Of a discernible race? Dyed, bleached or otherwise treated? Cut, forcibly removed or shed naturally?
But there was no consensus among hair examiners about how many of these characteristics were needed to declare a match.
Justice Department officials have known for years that flawed forensic work might have led to the convictions of potentially innocent people, but prosecutors failed to notify defendants or their attorneys even in many cases they knew were troubled.
Officials started reviewing the cases in the 1990s after reports that sloppy work by examiners at the FBI lab was producing unreliable forensic evidence in court trials. Instead of releasing those findings, they made them available only to the prosecutors in the affected cases, according to documents and interviews with dozens of officials.
In addition, the Justice Department reviewed only a limited number of cases and focused on the work of one scientist at the FBI lab, despite warnings that problems were far more widespread and could affect potentially thousands of cases in federal, state and local courts.
As a result, hundreds of defendants nationwide remain in prison or on parole for crimes that might merit exoneration, a retrial or a retesting of evidence using DNA because FBI hair and fiber experts may have misidentified them as suspects.
In one Texas case, Benjamin Herbert Boyle was executed in 1997, more than a year after the Justice Department began its review. Boyle would not have been eligible for the death penalty without the FBI’s flawed work, according to a prosecutor’s memo.
The case of a Maryland man serving a life sentence for a 1981 double killing is another in which federal and local law enforcement officials knew of forensic problems but never told the defendant. Attorneys for the man, John Norman Huffington, say they learned of potentially exculpatory Justice Department findings from The Washington Post. They are seeking a new trial.
Justice Department officials said that they met their legal and constitutional obligations when they learned of specific errors, that they alerted prosecutors and were not required to inform defendants directly.
The review was performed by a task force created during an inspector general’s investigation of misconduct at the FBI crime lab in the 1990s. The inquiry took nine years, ending in 2004, records show, but the findings were never made public.