Prosecutors Respond to Calls for Forensic Science Reform: More Sharks in Dirty Water
Initial reactions to the PCAST report from the law enforcement community leave little hope that it will inspire any more reform than the NRC Report has. In the wake of the PCAST report, several law enforcement officials and organizations have commented on the findings and recommendations. Attorney General Loretta Lynch had the following to say:
We remain confident that, when used properly, forensic science evidence helps juries identify the guilty and clear the innocent, and the Department believes that the current legal standards regarding the admissibility of forensic evidence are based on sound science and sound legal reasoning . . . . While we appreciate [the PCAST report’s] contribution to the field of scientific inquiry, the department will not be adopting the recommendations related to the admissibility of forensic science evidence.6
The National District Attorneys Association (NDAA) released a similarly critical statement. It suggested that adequate safeguards exist in the criminal justice system to prevent flawed forensic science from entering the courtroom or convicting the innocent. In doing so, the NDAA relied on the supposed strength of judges as evidentiary gatekeepers and the ability of defense attorneys to conduct effective cross-examinations.7 The NDAA statement concluded that, “Adopting any of [PCAST’s] recommendations would have a devastating effect on the ability of law enforcement, prosecutors and the defense bar, to fully investigate their cases, exclude innocent suspects, implicate the guilty, and achieve true justice at trial.”8
These statements by Attorney General Lynch and the NDAA highlight several fundamental issues facing the criminal justice system in its use of forensic science. First, both Attorney General Lynch and the NDAA ignore the realities of the criminal justice system. Wrongful convictions and miscarriages of justice occur more often than one might expect,9 and neither the judiciary nor vigorous cross-examination is sufficient to prevent flawed science from convicting innocent persons. Second, these statements and the underlying attitudes and practices suggest a misplaced emphasis on convictions over truth and fairness in the criminal justice system.
The actual number of wrongful convictions is a figure that is unlikely to ever be known. However, exonerations by the Innocence Project and those contained in the National Registry of Exonerations give us a sense of the problem. To date, 344 people have been exonerated by DNA evidence for crimes they did not commit10 and more than 1,500 have been exonerated by other means.11 The Innocence Project notes that of the first 225 exonerations, through 2009, more than fifty percent involved unvalidated or improper forensic science as a contributing factor in wrongfully convicting the defendant.12 Attorney General Lynch’s response to the PCAST report would suggest that these, and any subsequent cases of wrongful conviction in which flawed forensics is a contributor, are the result of either incompetence or individual human error, such as improper collection, processing, or interpretation of evidence. This belief has been referred to as the “Bad Apples” explanation.13 However, as the 2009 NRC Report, the 2016 PCAST Report, and substantial academic research over the past several decades make clear, the “Bad Apples” view is a simplistic mischaracterization of complex, serious, and systemic problems.14 Furthermore, both reports and the existing academic literature highlight the difficulty, if not impossibility, of meeting the conditions necessary to satisfy Attorney General Lynch’s assertion regarding “proper use.” Both the NRC Report and the PCAST Report cast serious doubts on the “foundational validity”15 and “validity as applied”16 of several feature comparison methods, including bite mark, shoe print, fingerprint, and firearm/toolmark analysis.17 Given this, how can we, as Attorney General Lynch suggests, “proper[ly] use” unvalidated techniques and unsubstantiated testimony?
This Essay argues that, contrary to Attorney General Lynch’s statements, there is no “proper” way to use flawed and unvalidated forensic science evidence. It begins by exploring how lack of scientific knowledge and pro-prosecution biases undermine the judiciary’s ability to act as effective gatekeepers of scientific evidence. It also contends that lack of scientific knowledge renders cross-examination by defense counsel unlikely to address the weaknesses and flaws in some scientific evidence. This Essay then turns its attention to the NDAA’s exaggerated concerns about the criminal justice system’s ability to function effectively in the absence of certain forensic techniques. Examining the history of forensic DNA typing, this Essay demonstrates that the legal system can continue to function while rendering inadmissible flawed scientific evidence and exaggerated claims by forensic examiners. Finally, this Essay concludes that despite the efforts of academics and the possibility of improving forensic science through research and collaboration, key law enforcement officials’ attitudes render it unlikely that meaningful reform can happen. This Essay calls for a more open-minded approach and willingness to work with academics and researchers to improve the criminal justice system and reduce miscarriages of justice.
I. Forensic Science, Judges, and Lawyers
Attorney General Lynch has misplaced faith in judges as arbiters of the quality of scientific evidence. In Daubert v. Merrell Dow Pharmaceuticals, Inc., the Supreme Court interpreted how the Federal Rules of Evidence govern the admissibility of scientific evidence.18 The Court developed tests to assess the relevance, reliability, and admissibility of scientific evidence. Its test for reliability recommends assessing five factors pertaining to the evidence at issue.19 This standard makes judges the “gatekeepers” for determining the admissibility of testimony related to scientific evidence. Yet, this procedural safeguard has proven ineffective. Two possible explanations exist for this phenomenon: lack of judicial scientific aptitude and systemic pro-prosecution bias.
Judges are generalists who often have little training in the sciences.20 The 2009 NRC Report noted:
The adversarial process relating to the admission and exclusion of scientific evidence is not suited to the task of finding “scientific truth.” The judicial system is encumbered by, among other things, judges and lawyers who generally lack the scientific expertise necessary to comprehend and evaluate forensic evidence in an informed manner . . . .21
The NRC Report’s assertion came as no surprise. Several years before the NRC Report was released, scholars had already begun to question whether trial court judges were equipped to assess highly technical scientific claims.22 Indeed, trial judges themselves have admitted their inability to handle complex scientific issues.23 In a 2001 study, Sophia Gatowski and her colleagues surveyed 400 state trial court judges, 191 (48%) of whom said they felt they had been inadequately prepared in their education to handle the types of scientific evidence they faced on the bench.24 Gatowski and Richardson found that an overwhelming majority of the judges surveyed could not correctly demonstrate a basic understanding of two of the Daubert criteria: falsifiability and error rates.25
With respect to falsifiability (also known as the testability of the technique), several responses showed an alarming lack of familiarity. Judges said, for example, “I would want to know if the evidence was falsified,” and “I would look at the results and determine if they are false.”26 Chief Justice Rehnquist foreshadowed these problems in his opinion in Daubert, where he observed: “I defer to no one in my confidence in federal judges; but I am at a loss to know what is meant when it is said that the scientific status of a theory depends on its ‘falsifiability,’ and I suppose some of them will be, too.”27 Falsifiability, drawn from Karl Popper’s philosophy of science, simply demands that a theory or hypothesis be capable of being proven false through empirical testing.28 Thus, in deciding admissibility, judges should focus on whether the underlying theory or method of the forensic discipline can be and has been tested, rather than on whether the results in a specific case are incorrect or have been altered. The judges surveyed exhibited a similar lack of understanding regarding error rates. Only 15 of 364 judges demonstrated even a basic understanding of error rates (e.g., that a technique with too high an error rate should be rejected because of the high risk of being wrong or making a mistake).29 Few understood that an error rate has two components—false negatives (when a test identifies a true positive as a negative) and false positives (when a test identifies a true negative as a positive).30 The study further suggested that judges’ inability to operationalize and implement the Daubert criteria could produce inconsistent admissibility decisions, meaning a technique that passes muster in one judge’s court could very well fail the test in another.31
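To make the two components of an error rate concrete, consider a purely hypothetical validation study; the numbers below are invented for illustration and do not describe any actual forensic discipline. Suppose examiners are given 100 pairs of samples known to come from the same source and 100 pairs known to come from different sources, and that they erroneously declare five of the different-source pairs to be matches while missing ten of the same-source pairs. The two error rates would then be:

false positive rate = false positives / all different-source pairs = 5 / 100 = 5%
false negative rate = false negatives / all same-source pairs = 10 / 100 = 10%

Under Daubert, the pertinent questions are whether such rates have been measured at all and whether they are low enough for the technique’s intended use, not, as some of the surveyed judges assumed, whether the particular result before the court has been “falsified.”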
Lack of scientific aptitude may not be the only factor at play when explaining the judiciary’s failure to keep bad science out of courtrooms. We should also carefully consider the possibility of a systemic pro-prosecution bias on the bench. This bias may stem from the fact that a significant number of judges are former prosecutors. For example, forty-three percent of President Obama’s nominees to federal trial courts were previously state or federal prosecutors, while only fifteen percent were public defenders.32 The disparity is seen in state courts as well. A 2009 study found that fifteen percent of state supreme court justices had experience as public defenders, while thirty-three percent of the justices had experience as prosecutors.33 In Cook County, Illinois, seventy-five percent of judges hearing felony cases had served as prosecutors, and many of them had served only as prosecutors before becoming judges.34
Other factors may also contribute to a pro-prosecution bias and tough-on-crime approach, including judges’ desire to be re-elected in those states that hold judicial elections. In a 2015 report, the Brennan Center for Justice synthesized the research from a number of studies examining the impact of judicial elections on criminal cases.35 The report found “that re-election and retention pressures systematically disadvantage criminal defendants.”36 While the mere fact that many judges were previously prosecutors and/or seek to be re-elected does not guarantee bias, empirical research suggests that bias against defendants does contribute to admissibility rulings.37
With respect to allowing flawed evidence, at least one scholar has noted that, while courts rigorously engage in gatekeeping in civil cases, there is no parallel approach in criminal cases.38 As a result, criminal defendants tend to lose admissibility challenges to forensic evidence.39 This systemic bias also manifests in judges’ efforts to exclude defense experts from court. For example, in one case, a judge excluded the testimony of the defendant’s expert, a scholar of the sociology and history of science and technology.40 The defense proposed that the expert, Dr. Cole, testify to the validity and reliability issues associated with latent fingerprint evidence. Although the admissibility of scientific evidence in New York is governed by Frye v. United States,41 which differs from Daubert, the court noted that “[e]ven applying the Federal Courts Daubert Standard what Dr. Cole has offered here is ‘junk science’ . . . . [It is] interesting but too lacking in scientific method to even bloody the field of fingerprint analysis as a generally accepted scientific discipline.”42 The court’s exclusion of Dr. Cole’s testimony was, at best, ironic. First, the claim at the heart of Dr. Cole’s work and his expert testimony is that the reliability, accuracy, and validity of latent fingerprint identification are largely unknown.43 One could reasonably conclude from this that latent print identification has itself largely lacked scientific method. Second, Dr. Cole’s criticisms are echoed by the NRC Report and the PCAST report, clearly indicating that he is not a rogue, contrarian academic, but rather one of many academics who have raised concerns about latent fingerprint identification.44 Courts have exhibited similar resistance to defense-offered expert evidence regarding the reliability of human memory and eyewitness testimony, excluding testimony on a variety of grounds.45 In several cases, courts have held that expert testimony on eyewitness identification is not sufficiently scientifically reliable to be admissible.46 Others have found that it is within the court’s province to instruct the jury on the reliability of eyewitness identification, making the admission of expert testimony improper and unnecessary.47
We do not live in a perfect world where judges are universally capable of using Daubert to adequately distinguish between good and bad science, either because of personal biases or insufficient scientific knowledge, or possibly both. As such, Attorney General Lynch’s faith in the judiciary is both misinformed and misplaced.
Cross-examination is not without its faults either. Although the NDAA places great faith in cross-examination as an effective means of highlighting weaknesses in evidence, it is unlikely that defense lawyers are any more adept at addressing the shortcomings of forensic evidence than judges are. As Professor David Faigman notes, lawyers generally lack training in scientific methods and usually struggle to articulate scientific concepts.48 Half-jokingly, Faigman comments that nothing puts law students to sleep faster than putting numbers on the board.49 Given these facts, how can we expect defense lawyers, many of whom are overburdened with larger-than-recommended caseloads,50 to subject forensic experts to meaningful cross-examination that would highlight potential methodological flaws, lack of scientific validity, and the possibility of procedural errors? Indeed, even if lawyers could accomplish such a feat, serious doubts would remain about the jury’s ability to understand the significance of these examinations and the subtleties of these attorneys’ challenges. As fictional trial consultant Rankin Fitch points out, the average juror isn’t King Solomon.51
In an ideal world, Daubert may be sufficient to protect criminal defendants from the perils of flawed forensic science. However, a lack of scientific aptitude and pro-prosecution bias render judges ineffective at appropriately admitting and excluding forensic science evidence under Daubert. Additionally, lack of scientific familiarity among lawyers and jurors makes cross-examination unlikely to adequately highlight the flaws in some forensic science disciplines. The best path forward for the criminal justice system involves scientific reform outside of the courtroom.
II. Scientific Reform Can Proceed
The NDAA’s hyperbolic response to the PCAST Report borders on contempt for truth and justice. The NDAA implies that the criminal justice system will come to a screeching halt and the guilty will roam free if forensic science disciplines are held to the standards in the PCAST report and forced to reform their practices and procedures. History tells us this is not the case. Evidence can be meaningfully challenged and excluded, scientific disciplines reformed, and evidence from those disciplines eventually admitted again, all without the Four Horsemen roaming the streets of Anytown, USA. Forensic DNA typing, now seen by many as the “Gold Standard” of forensic evidence, faced significant challenges in the late 1980s and early 1990s. Those challenges involved many of the same problems that now confront the disciplines discussed in both the PCAST and NRC reports—most notably latent fingerprint analysis. And yet, somehow, the criminal justice system remained operational even as forensic DNA underwent a radical transformation.
In fact, DNA profiling is an excellent starting point for discussing how best to reform scientific evidence.52 Forensic DNA as we know it is the product of the Anglo-American legal system interacting with science and technology over the course of a decade.53 The “DNA Wars” of the late 1980s and early 1990s played an essential role in the development and refinement of forensic DNA testing.54 Following Alec Jeffreys’s development of DNA identification techniques in 1984, the method was quickly adopted by law enforcement officials. First used in a U.S. courtroom in 1987, DNA evidence was accepted with little challenge in jurisdictions across the nation shortly thereafter.55 By the end of 1988, forensic DNA evidence had been admitted in nearly 200 cases.56 As Justice Wilkins of the Supreme Judicial Court of Massachusetts noted, DNA acquired an “aura of infallibility,”57 much like fingerprint and other forensic disciplines that have now come under fire in the NRC and PCAST reports.58 Yet, DNA soon came under criticism from a series of lawyers, academics, and expert witnesses. In an article in the Virginia Law Review, Professor William Thompson and Simon Ford framed the admissibility debate pointedly. In addition to noting several issues that needed to be resolved,59 they concluded that the stakes were high: courts had to weigh the danger that excessive caution would keep valuable evidence out of court against the risk that evidence accepted quickly and uncritically might prove less reliable than promised.60 In other words, courts must strike a balance between the risk of letting the guilty go free and the risk of convicting the innocent.
In 1989, Barry Scheck and Peter Neufeld, who would later co-found the Innocence Project, mounted the first serious challenge to the validity and admissibility of DNA evidence in People v. Castro.61 Subsequent challenges followed in State v. Schwartz62 and United States v. Yee.63 While the defense largely lost the battle in these cases, this series of challenges led to the 1992 National Research Council (NRC) report, “DNA Technology in Forensic Science.”64 The report expressed many of the same concerns about DNA evidence that have since been expressed about other disciplines in the 2009 NRC report and the 2016 PCAST report, namely concerns about the reliability and validity of the processes.65 Specifically, the 1992 NRC report noted the potential for errors arising from improperly maintained equipment and reagents, as well as from specimen contamination.66 The report made several recommendations, including calls for scientifically reliable and precise procedures, proficiency testing and audits, laboratory accreditation, duplicate testing of samples, and further research into population substructure.67 It also recognized the lack of, and need for, standardization in laboratory procedures.68 Following the 1992 NRC report, several jurisdictions, including California and Massachusetts, ruled DNA evidence inadmissible.69
In People v. Barney,70 a California court held that the statistical significance of a match between the defendant’s DNA and the sample taken from the crime scene did not meet the standard for admissibility.71 In Commonwealth v. Lanigan, the Supreme Judicial Court of Massachusetts issued an opinion that highlighted the debate surrounding DNA evidence, particularly with regard to population substructure, and held that the evidence failed to meet the Frye standard, which Massachusetts used at the time.72 These cases helped move the debate from the courtroom into scientific journals, which focused on how to create laboratory standards and understand population substructure.73 Following changes in laboratory standards, accreditation, and additional research into subpopulations, the debate was laid to rest.74 At that point, courts were once again prepared to admit DNA evidence, its scientific reliability having been enhanced and its evidentiary status fortified. Certainly, if the criminal justice system can survive the challenge and exclusion of what is likely the most conclusive forensic feature-comparison discipline, it can survive the exclusion of less certain and reliable forensic science disciplines.
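For readers unfamiliar with the population-genetics dispute at the heart of these cases, a simplified sketch may help; the allele frequencies below are invented purely for illustration and are not drawn from any actual database. Under the so-called product rule, a laboratory estimates the random match probability by multiplying per-locus genotype frequencies, assuming Hardy-Weinberg equilibrium within each locus and independence across loci:

frequency of a heterozygous genotype with alleles a and b: 2 × f(a) × f(b)
e.g., 2 × 0.10 × 0.05 = 0.01 at a single locus
random match probability across four such loci: 0.01 × 0.01 × 0.01 × 0.01 = 1 in 100,000,000

The critics’ objection in cases like Barney and Lanigan was that if the relevant population contains substructure (subgroups in which particular alleles tend to occur together), the independence assumptions break down and the multiplication can substantially overstate how rare a profile is. The more conservative calculation methods and subpopulation research that followed addressed that objection, which is why courts were ultimately willing to admit the evidence again.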
Finally, the NDAA’s hostile attitude toward reform suggests an emphasis on convictions and a belief that the criminal justice system’s current error rate is “good enough.” No longer can we deny that the system makes mistakes—wrongful convictions happen. At a minimum, we have more than 300 pieces of proof that the system isn’t perfect. Surely, any system that relies on human judgment (e.g., juror judgment) will make mistakes. It would be idealistic and naïve to hope that the criminal justice system would never, in practice, convict an innocent person or free a guilty person. However, settling for a system reliant upon unvalidated and flawed forensic science that holds such persuasive power over juries is antithetical to the concepts of justice and fairness.
Conclusion
While academics and some practitioners work to validate and better understand some forensic science disciplines, such as fingerprint identification, those in positions of power seem content to take a steadfast, obstructionist approach that will likely lead to further miscarriages of justice. Ultimately, the responses to the PCAST report from Attorney General Lynch, the NDAA, and others75 demonstrate a disturbing attitude towards justice and a lack of appreciation for the realities of the criminal justice system and the scope of the problems facing forensic science today. The PCAST report offered a number of suggestions for restructuring and reforming forensic science to ensure the scientific validity of forensic feature-comparison disciplines. Until law enforcement officials and forensic science organizations and practitioners are open to engaging in meaningful reform, little progress will be made and miscarriages of justice are likely to continue as a result of flawed and unvalidated forensic evidence. For now, the path forward for forensic science seems littered with obstacles and hazards.
Adam B. Shniderman is an Assistant Professor, Department of Criminal Justice, Texas Christian University. B.A., Amherst College, cum laude, Law, Jurisprudence, and Social Thought; Ph.D., University of California – Irvine, Criminology, Law and Society. The author would like to thank Simon Cole for his helpful comments on this article.
Preferred Citation: Adam B. Shniderman, Prosecutors Respond to Calls for Forensic Science Reform: More Sharks in Dirty Water, 126 Yale L.J. F. 348 (2016), http://www.yalelawjournal.org/forum/prosecutors-respond-to-calls-for-forensic-science-reform.
President’s Council of Advisors on Sci. & Tech., Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods (Sept. 2016) [hereinafter President’s Council], http://www.whitehouse.gov/sites/default/files/microsites/ostp/PCAST/pcast_forensic_science_report_final.pdf [http://perma.cc/VRQ9-2DE4].
Nat’l Research Council, Strengthening Forensic Science in the United States: A Path Forward (2009) [hereinafter Strengthening Forensic Science], http://www.nap.edu/catalog/12589.html [http://perma.cc/PY6P-HPPT].
Kira Lerner, Attorney General To Ignore New Report Finding that Commonly-Used Forensics Are Bogus, Think Progress (Sept. 21, 2016), http://thinkprogress.org/attorney-general-to-ignore-new-report-finding-that-commonly-used-forensics-are-bogus-633a3b313a6a#.6o0cjodsc [http://perma.cc/W5VY-8L4Z].
Press Release, Nat’l Dist. Attorneys Ass’n, National District Attorneys Association Slams President’s Council of Advisors on Science and Technology Report (Sept. 2, 2016), http://www.ndaa.org/pdf/NDAA%20Press%20Release%20on%20PCAST%20Report.pdf [http://perma.cc/SZ3Y-FDKC].
While an ideal system would make zero errors, it is important to recognize that any system involving human judgment, particularly one with as many stages as a criminal case passes through, will yield some errors. However, to the extent that forensic science plays an important role in criminal proceedings, errors produced by flawed and unvalidated evidence and testimony are unnecessary and could likely be significantly reduced by following the recommendations of experts, including those involved in the 2009 National Research Council Report or the 2016 President’s Council Report.
See DNA Exonerations Nationwide, Innocence Project (2016), http://www.innocenceproject.org/dna-exonerations-in-the-united-states [http://perma.cc/247J-M4TM].
Univ. of Mich. Law Sch., The First 1,600 Exonerations, Nat’l Registry Exonerations, http://www.law.umich.edu/special/exoneration/Documents/1600_Exonerations.pdf [http://perma.cc/A8CV-YSU7].
See Wrongful Convictions Involving Unvalidated or Improper Forensic Science That Were Later Overturned through DNA Testing, Innocence Project (2016), http://www.innocenceproject.org/wp-content/uploads/2016/02/DNA_Exonerations_Forensic_Science.pdf [http://perma.cc/GDL9-FBJS].
Id.; see also Simon A. Cole, Acculturating Forensic Science: What Is ‘Scientific Culture,’ and How Can Forensic Science Adopt It?, 38 Fordham Urb. L.J. 435 (2010) (arguing for regulation of DNA profiling, accreditation requirements for crime laboratories, standardization of written protocols, quality assurance programs, and forensic science commissions); Paul C. Giannelli, Wrongful Convictions and Forensic Science: The Need To Regulate Crime Labs, 86 N.C. L. Rev. 163 (2007); Jennifer E. Laurin, Remapping the Path Forward: Toward a Systemic View of Forensic Science Reform and Oversight, 91 Tex. L. Rev. 1051 (2012) (arguing that systemic problems affecting “upstream” use of forensic evidence have frustrated national efforts at evidence reform).
President’s Council, supra note 1, at 4 (“Foundational validity for a forensic-science method requires that it be shown, based on empirical studies, to be repeatable, reproducible, and accurate, at levels that have been measured and are appropriate to the intended application. Foundational validity, then, means that a method can, in principle, be reliable.”).
Id. at 5 (“Validity as applied means that the method has been reliably applied in practice. It is the scientific concept we mean to correspond to the legal requirement, in Rule 702(d), that an expert ‘has reliably applied the principles and methods to the facts of the case.’” (emphasis in original)).
Under Daubert, an admissible scientific technique should generally: 1) be falsifiable/testable; 2) have been subjected to peer review; 3) possess a known or potential error rate; 4) maintain standards and controls concerning the operation of the technique; and 5) be generally accepted by the relevant scientific community. Id. at 593-94.
Kate Berry, How Judicial Elections Impact Criminal Cases, Brennan Ctr. for Just. 1-2 (Dec. 2, 2015), http://www.brennancenter.org/sites/default/files/publications/How_Judicial_Elections_Impact_Criminal_Cases.pdf [http://perma.cc/LKL5-SSLN].
See, e.g., Rachel Dioso-Villa, Is the Expert Admissibility Game Fixed?: Judicial Gatekeeping of Fire and Arson Evidence, 38 L. & Pol’y 54, 75 (2016) (empirically finding a pro-prosecution bias based on an examination of fire and arson evidence); D. Michael Risinger, Navigating Expert Reliability: Are Criminal Standards of Certainty Being Left on the Dock?, 64 Alb. L. Rev. 99, 131-35 (2000) (examining federal appellate opinions and district court cases and finding evidence to support the theory that there is a systemic bias against criminal defendants in judges’ admissibility decisions); Adam B. Shniderman, You Can’t Handle the Truth: Lies, Damned Lies, and the Exclusion of Polygraph Evidence, 22 Alb. L.J. Sci. & Tech. 433, 469-70 (2012) (suggesting that the true reason for the exclusion of polygraph evidence is a systematic bias against defendants).
See, e.g., Affidavit of Simon Cole in Support of Motion to Exclude Testimony of Forensic Fingerprint Examiner and Request for a Daubert Hearing at 4, United States v. Rudolph, CR-00-S-422-S (N.D. Ala. Oct. 5, 2004), http://www.ncids.com/forensic/fingerprints/simon_cole_affidavit.pdf [http://perma.cc/SF68-82DR].
Indeed, Cole notes that he and other academics have been actively excluded from the reference frame when courts determine who constitutes the “relevant scientific community” for the purposes of assessing latent fingerprint’s general acceptance. In doing so, courts ignore a large number of academics who agree with Cole’s position that latent print identification is of dubious scientific validity and reliability. See, e.g., Simon A. Cole, Comment on “Scientific Validity of Fingerprint Evidence Under Daubert”, 7 L. Probability & Risk 119, 124 (2007).
See United States v. Rodriguez-Felix, 450 F.3d 1117, 1122-27 (10th Cir. 2006) (upholding the district court’s exclusion of expert testimony for lack of scientific reliability); United States v. Langan, 263 F.3d 613, 620-25 (6th Cir. 2001) (upholding the trial court’s exclusion of expert testimony on eyewitness identification for lack of scientific reliability).
United States v. Jones, 689 F.3d 12, 20 (1st Cir. 2012). Unfortunately, jury instructions appear to have little success in sensitizing jurors to the relevant issues in eyewitness testimony. Professor Edith Greene conducted a series of studies on the effect of instructions on jurors’ decisions. See Edith Greene, Judge’s Instruction on Eyewitness Testimony: Evaluation and Revision, 18 J. Applied Soc. Psych. 252 (1988). Instructions in her study were derived from United States v. Telfaire, 469 F.2d 552 (D.C. Cir. 1972), which instructs jurors to consider specific factors that may influence the reliability of eyewitness testimony. Telfaire, 469 F.2d at 558-59. The identification instructions did not help jurors distinguish between good and bad eyewitnesses. Greene, supra, at 274-75. Professor Greene’s findings have been replicated in subsequent studies. See, e.g., Brian H. Bornstein & Joseph A. Hamm, Jury Instructions on Witness Identification, 48 Ct. Rev. 48 (2012); Gabriella Ramirez et al., Judges’ Cautionary Instructions on Eyewitness Testimony, 14 Am. J. Forensic Psychol. 31 (1996). Ultimately, given the complexity of issues associated with forensic evidence, it seems unlikely that jury instructions would be sufficient to allow jurors to distinguish between good and bad scientific evidence. Furthermore, the fundamental flaws in many scientific disciplines discussed in the PCAST and NRC reports are more appropriately considered with respect to admissibility (by a judge), rather than weight (by a jury), making instructions an inadequate and inappropriate means of safeguarding against an incorrect verdict.
Hannah Levintova, Jaeah Lee & Brett Brownell, Why You’re in Deep Trouble If You Can’t Afford a Lawyer, Mother Jones (May 6, 2013, 5:00 AM), http://www.motherjones.com/politics/2013/05/public-defenders-gideon-supreme-court-charts [http://perma.cc/5ZAK-SY3V].
In their article, Thompson and Ford noted several issues with forensic DNA typing at the time: claims about the certainty of DNA evidence were exaggerations; the technique had not yet been standardized, so there was no way to ensure that the work done by a specific laboratory complied with a generally accepted methodology; autoradiographs were more difficult to interpret, and bands more difficult to measure, than Cellmark and Lifecodes were willing to admit; there was a significant possibility of laboratory error, such as contamination or making an erroneous “call,” and of human error, such as sample mix-up, at several points during the DNA typing procedure; there were serious problems with the way that private laboratories calculated and presented statistical probabilities because of the reliance on unverified assumptions about populations and the independence of various alleles within them; and adequate validation and reliability studies had not been performed. See William C. Thompson & Simon Ford, DNA Typing: Acceptance and Weight of the New Genetic Identification Tests, 75 Va. L. Rev. 45 (1989).
Nat’l Research Council, DNA Technology in Forensic Science (1992), http://www.nap.edu/catalog/1866/dna-technology-in-forensic-science [http://perma.cc/S3ES-Y975].
While challenges were successful in several states, similar evidence was deemed admissible in federal courts. Successful challenges in the state court system largely relied on population genetics to challenge the “random match probability” provided with the evidence, which is an important component for understanding the value of a DNA match. See Gary Edmond, Simon Cole, Emma Cunliffe & Andrew Roberts, Admissibility Compared: The Reception of Incriminating Expert Evidence in Four Adversarial Jurisdictions, 3 Denv. Crim. L. Rev. 31 (2014).
See, e.g., Comments on: President’s Council of Advisors on Science and Technology Report to the President, Forensic Science in Federal Criminal Courts: Ensuring Scientific Validity of Pattern Comparison Methods, FBI (Sept. 20, 2016), http://www.fbi.gov/file-repository/fbi-pcast-response.pdf [http://perma.cc/VW3A-NUV5]; Position Statement, Am. Congress Forensic Sci. Laboratories (Sept. 21, 2016), http://forensicfoundations.com/resources/Documents_OPC/2016_0921_PS_PCAST.pdf [http://perma.cc/SLY6-66GY].