
Forensic Science – Believe It or Not? Public Attitudes toward Forensic Evidence in Israel

Published online by Cambridge University Press:  19 April 2024

Naomi Kaplan-Damary*
Affiliation:
Institute of Criminology, Faculty of Law, The Hebrew University of Jerusalem, Jerusalem, Israel
Tal Jonathan-Zamir
Affiliation:
Institute of Criminology, Faculty of Law, The Hebrew University of Jerusalem, Jerusalem, Israel
Gali Perry
Affiliation:
Institute of Criminology, Faculty of Law, The Hebrew University of Jerusalem, Jerusalem, Israel
Eran Itskovich
Affiliation:
Institute of Criminology, Faculty of Law, The Hebrew University of Jerusalem, Jerusalem, Israel
*
Corresponding author: Naomi Kaplan-Damary; Email: naomi.kaplan@mail.huji.ac.il

Abstract

Forensic science is undergoing an unprecedented period of reform. Wrongful convictions and errors of impunity have been attributed largely to forensic evidence, and concerns over the scientific foundations of many forensic disciplines have been raised in key official reports. In these turbulent times, it becomes particularly interesting to understand how forensic evidence is understood by the general public. Is it idealized? Are its inherent limitations recognized? The present study seeks to contribute to this growing body of work by addressing two main questions: (1) How does the general public perceive forensic science?; (2) How correct are individuals in their evaluations of specific types of forensic evidence? A survey of the Israeli public reveals considerable trust in the ability of forensics to reliably identify the perpetrator of a crime, although less trust is expressed when questions lead respondents to consider specific stages in the forensic process. Furthermore, respondents were often incorrect in their evaluations of the reliability of specific types of forensic evidence. The implications of these findings for police legitimacy, the practice of the criminal justice system, and the future study of attitudes toward forensic evidence, are discussed.


This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© International Society of Criminology, 2024

INTRODUCTION

Forensic science can be defined as the application of scientific methods and processes to solve crimes, often through the comparison of physical evidence gathered at a crime scene to that obtained from a suspect (Eckert 1996; Houck and Siegel 2009; Saferstein 2017). For most of the twentieth century, forensic methods were perceived to be trustworthy and reliable by courts, attorneys, jurors and the general public. As stated by Mnookin et al. (2011, 726):

Long-used types of forensic science – fingerprint examination, handwriting analysis, firearms and toolmark comparison, and other forms of pattern and impression evidence – are mainstays of criminal prosecution. For roughly a hundred years, these comparison and identification methods have regularly and routinely been employed as legal evidence. For most of that period, courts, attorneys, jurors, and the public, as well as forensic analysts themselves, have largely accepted this evidence as trustworthy and uncontroversial.

The idealization of forensic science changed radically in the early 2000s, when forensic methods came under growing public scrutiny and criticism. Professional carelessness, cognitive biases or downright incompetence have raised doubts about the trustworthiness of findings at several forensic laboratories (Mnookin et al. 2011). For example, in 2004, several senior Federal Bureau of Investigation (FBI) examiners mistakenly linked a fingerprint connected with the Madrid train bombing to Brandon Mayfield, an American lawyer and a convert to Islam (Mnookin et al. 2011). One of the greatest challenges to forensic practices has emerged from within forensic science itself, in the wake of advances in DNA analysis. Studies of wrongful convictions based on DNA findings revealed that errors in forensic tests constitute the second most important factor associated with such convictions (Saks and Koehler 2005; Innocence Project 2023). Concerns over the scientific foundations of many forensic disciplines were raised in two major official reports: the report of the Committee on Identifying the Needs of the Forensic Sciences Community, National Research Council (2009), and that of the President’s Council of Advisors on Science and Technology (2016).

In light of these turbulent times and the shift in policy makers’ approach to forensic science, it may well be asked how the general public perceives forensic evidence. Are ordinary citizens overcome by idealized views regarding the accuracy of forensic sciences (as might be expected given the surge of police investigation television shows; Cole and Dioso-Villa 2006; Schweitzer and Saks 2007), or have they become aware of the growing criticism and recognition of the field’s limitations? Moreover, how accurate is the public in its evaluation of forensic sciences compared to the scientific assessment of these fields? Given the inherent connection between forensic science and the police (police experts gather forensic evidence and criminal investigators/detectives use it to build criminal cases; e.g. Williams 2008), such evaluations can be seen as an expression of “police legitimacy”, a concept that has figured prominently in policing scholarship in recent decades (e.g. Nagin and Telep 2017, 2020; Weisburd and Majmundar 2018; Tyler and Nobo 2023). Furthermore, from the point of view of policy and practice, inaccurate perceptions (and particularly unfounded idealization of forensic evidence) could influence jury decisions (e.g. Winter and Greene 2007) and/or give rise to significant public pressure on the criminal justice system regarding particular cases (Roberts 2018).
Thus, the present work seeks to contribute to the growing body of research on public attitudes toward forensic science, focusing specifically on the three types of attitudes examined in previous research: general assessments of forensic science, views regarding the particular stages of the forensic process (including perceived error and degree of human judgement) and evaluations of particular forensic methods (e.g. DNA, fingerprints, bite marks).

The present article begins with an explanation of the importance of public views of forensic science, followed by a summary of the available studies on the matter. We then review the research on the accuracy of specific forensic methods, and provide a summary of the studies that have examined the general public’s understanding of the accuracy of these methods. This is followed by a description of the methodology of the present study (a community survey carried out in Israel) and an examination of the research findings. These reveal a strong public belief in the ability of forensic science as a whole, as well as specific forensic disciplines, to reliably identify the perpetrator of a crime, although less trust is expressed when questions lead respondents to consider the specific stages of the forensic process. In addition, while public views regarding the various forensic techniques appear to correspond with more objective and systematic assessments in the cases of DNA and fingerprint analysis, there is considerable divergence between public assessments and scientific findings concerning the other forensic disciplines. The implications of these findings are discussed, including their relationship to police legitimacy; the importance of measuring specific (rather than generalized) views of forensic evidence; the need to consider attitudes toward forensic evidence in the context of populism (the effort to advance policies that are likely to win public support; e.g. Roberts et al. 2002) and jury selection in order to reduce bias during trials; and the need to include detailed explanations and data indicating the reliability of different forensic techniques as part of the evidence presented to the court and the media.

The Study of Public Attitudes toward Forensic Evidence

Why are Public Attitudes toward Forensic Evidence Important?

Taking a bird’s-eye view, public attitudes toward forensic evidence are important because they are intrinsically linked to “police legitimacy”, an idea that has attracted much attention in recent decades from police practitioners and researchers, policy makers and the general public (e.g. Nagin and Telep 2017, 2020; Weisburd and Majmundar 2018; Tyler and Nobo 2023). This considerable interest stems from the fact that a large body of work has revealed a strong connection between citizens’ perceptions of the police as trustworthy and legitimate, and various socially desirable outcomes. These include, for example, compliance with police requests in interpersonal interactions (Tyler and Huo 2002), in enforcement contexts (Dickson, Gordon, and Huber 2022) and in routine, daily behaviours (Sunshine and Tyler 2003; Tyler and Fagan 2008; Tyler and Jackson 2014), as well as a willingness to cooperate with law enforcement authorities (Mazerolle et al. 2012; Wolfe et al. 2016), and actual cooperation with them (Mastrofski 1996; Dai, Frank, and Sun 2011; Mazerolle et al. 2013; Desmond, Papachristos, and Kirk 2016; for a meta-analysis, see Walters and Bolger 2019).

Views of forensic science, in turn, can be seen as one (of several) expressions of police legitimacy. Trust in forensics indicates trust in a salient technique used by the police, and perhaps in their professionalism more generally. But no less important is the question of whether this trust is “deserved”, that is, whether public beliefs about these specific techniques are correct (Kaplan, Ling, and Cuellar 2020). It is important to emphasize that the present study makes no pretence to incorporate public views of forensic evidence into the well-established model of police legitimacy (as a component, predictor or outcome of legitimacy; e.g. Tyler and Nobo 2023). Rather, in framing the study and situating it in the literature on public attitudes toward the police, we take a broad perspective and consider the way a significant element of police work (forensics) is viewed by the public as inevitably and intrinsically linked to the popular legitimacy of law enforcement authorities.

Focusing more specifically on policy and practice considerations, public attitudes toward forensic science in countries that use a jury system are important because they may influence jury decisions of guilt or innocence (Winter and Greene 2007). Kaplan et al. (2020, 271–2) note that as the use of forensic science becomes more frequent in criminal cases that come before a jury, it is increasingly important to understand jurors’ preconceptions about forensic investigation in order to minimize biases during proceedings; Kim, Barak, and Shelton (2009) have demonstrated that pre-trial juror expectations of forensic science may indeed affect jury decisions. Moreover, in her review of the literature, Eldridge (2019, 32) concluded that jurors:

… often undervalue evidence, particularly if it is in a discipline that they may have previously considered to be less discriminating. They do not understand numerical testimony well, although they may prefer to hear it, and they vary widely in their interpretation of verbal expressions …

In such constellations, general attitudes and biases concerning forensic evidence could have a heavy impact on jury decision-making. Indeed, prevailing cognitive models of juror decision-making (Pennington and Hastie 1981, 1986, 1988, 1993; Chen and Chaiken 1999) assert that jurors’ knowledge, expectations, attitudes and motivation influence their evaluation of the evidence and determination of a verdict (Winter and Greene 2007).

It is also important to note that public views may affect decision-making in the criminal justice system (Casillas, Enns, and Wohlfarth 2011; Roberts 2018). A useful example is the case of Joan Little, an African American woman who was charged with first-degree murder in 1974 after stabbing a white prison guard who had sexually assaulted her at North Carolina’s Beaufort County jail. Black women and men protested on her behalf, and a committee in her name raised hundreds of thousands of dollars for her bond and legal fees (Black Women’s Blueprint, Inc. 2019). Greene (2015, 428) argued that the “well-oiled defense fund” and “broad-based ‘Free Joan Little’ campaign” were instrumental in winning an acquittal. In her view, “without the funds and the activists’ support, Little might well have received a death sentence”. A more recent example can be found in Israel (the site of the present study), which relies on bench trials held before professional judges. In the context of the debate over reopening the murder trial of Zadorov in 2018, the Israel State Attorney stated:

We have no intention of changing our position because of public pressure from one sector or another. Our responsibility is to follow the evidence. That is how we behaved in the past and that is how we will act in the future (Dolev 2018).

This statement expressed the State Attorney’s commitment to refrain from making decisions on the basis of public pressure, but the need to explicitly state this reflects recognition of the strong, potential effects of public pressure on the criminal justice system.

Such potential for public influence echoes the concept of penal populism – the promotion of a policy not because it was found to be effective in achieving its goals, but because of its expected political benefits. More specifically, penal populism refers to the effort to advance penal policies because they are more likely to garner votes or win public support than policies that would effectively reduce crime or encourage truth-finding in the criminal justice system, but are less popular (e.g. Roberts et al. 2002). Similarly, recent studies have shown that public opinion favouring more severe punishment (“public punitiveness”) has had considerable impact on court decision-making in the USA (Pickett 2019). As noted by Roberts (2018, 197, citing US Department of Justice 1987):

Many sentencing commissions and judges are to some degree affected by public pressure to make sentences harsher. The Chairman of the U.S. Federal Sentencing Guidelines Commission has stated that: “Public input has played a pivotal role in the formulation of the sentencing guidelines.”

Public Attitudes toward Forensic Evidence: What Do We Already Know?

To date, numerous studies have focused on the attitudes of the general public toward forensic science within the context of the so-called “CSI [crime scene investigation] effect” (Cole and Dioso-Villa 2006; Schweitzer and Saks 2007). This effect was named after the popular television programme CSI and its spin-offs, which bring forensic sciences into the spotlight and glorify their effectiveness, thus potentially creating unrealistic expectations regarding their utility and accuracy. These expectations, in turn, place a burden on the prosecution to bring forensic evidence indicating the suspect’s guilt (i.e. if there is no damning forensic evidence, the accused person must be innocent). Alternatively, they may create an almost blind faith in the reliability of forensic practices, which places a burden on the defence (i.e. if there is damning forensic evidence, the accused person must be guilty; Cole and Dioso-Villa 2006; Schweitzer and Saks 2007).

Thus far, studies have failed to establish a consistent relationship between viewing CSI and jurors’ verdict decisions (Podlas 2006; Shelton, Kim, and Barak 2006; Schweitzer and Saks 2007; Kim et al. 2009), and recent studies have found no evidence for such an effect at all (Klentz, Winters, and Chapman 2020; Lodge and Zloteanu 2020). Whatever the case may be, by focusing on direct exposure to forensic crime television dramas, this body of work does not account for the possible effects of other sources of forensic information, such as different crime-related programmes, crime fiction literature or media coverage of forensic issues. Moreover, this work focuses on jury verdicts. It does not examine whether watching CSI affects attitudes toward forensic science, which may, in turn, have an impact on juror decisions (Smith and Bull 2012). Therefore, given the possibility that pre-trial juror expectations of forensic science do in fact affect jury decisions (see Kim et al. 2009), Smith and Bull (2012, 2014) developed a scale to measure pre-trial bias regarding forensic evidence, and confirmed the existence of “pro-defence” and “pro-prosecution” biases. Their study found that the British public placed only a moderate degree of trust in forensic science (slightly less than 2.5 on a scale of 5).

A second, related body of work focused more specifically on the different stages of the forensic process. Ribeiro, Tangen, and McKimmie (2019) examined perceptions of the likelihood of error and degree of human judgement involved at various stages of the forensic process (i.e. collection, storage, testing, analysis, reporting and presenting). They found that the forensic process was perceived to involve a considerable degree of human judgement and to be relatively error prone. Subsequently, Kaplan et al. (2020), using a modified version of the instrument developed by Ribeiro et al. (2019), found that people held a pessimistic view of forensic investigations, believing that an error could occur roughly half the time at each stage.
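As a rough illustration of what such per-stage beliefs imply for the process as a whole, one can compound per-stage error probabilities across the six stages examined by Ribeiro et al. The independence assumption and all figures in the sketch below are ours, for illustration only, and do not come from the studies cited above:

```python
# Illustrative sketch (our assumption: stage errors are independent).
# If each of the six stages of the forensic process carries an error
# probability p, the implied chance of at least one error somewhere in
# the process is 1 - (1 - p)^6.

STAGES = ["collection", "storage", "testing", "analysis", "reporting", "presenting"]

def p_any_error(p_stage: float, n_stages: int = len(STAGES)) -> float:
    """Probability of at least one error across n_stages independent stages."""
    return 1 - (1 - p_stage) ** n_stages

# A belief that each stage errs "roughly half the time" implies near-certain
# error overall, whereas a 5% per-stage rate implies roughly one in four.
print(round(p_any_error(0.5), 4))   # 0.9844
print(round(p_any_error(0.05), 4))  # 0.2649
```

Under these (hypothetical) assumptions, even modest per-stage error beliefs translate into substantial perceived unreliability for the process as a whole, which may help explain the pessimism these studies report.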

The Accuracy of Forensic Disciplines: Reality v. Public Perceptions

How Reliable Are Forensic Techniques?

As already noted, for most of the past century, forensic methods were considered trustworthy and reliable. However, the Committee on Identifying the Needs of the Forensic Sciences Community, National Research Council (2009) report Strengthening Forensic Science in the United States: A Path Forward and the President’s Council of Advisors on Science and Technology (PCAST) report (President’s Council of Advisors on Science and Technology 2016) both found that the scientific foundations of many forensic disciplines were insufficient. PCAST evaluated specific forensic methods in order to determine whether they had been scientifically established as valid and reliable. The report reviewed research on seven forensic disciplines (DNA single-source and simple mixture, DNA complex mixture, bite marks, fingerprints, firearms, footwear and hair), and concluded that only two (DNA analysis of single-source and simple mixture samples and latent fingerprint analysis) were foundationally valid, meaning that they were repeatable, reproducible and accurate. With regard to the other five disciplines, some were not found to be foundationally valid, while others had not yet been subjected to objective research.

Regarding shoeprints, PCAST concluded that the foundational validity of footwear analysis associating shoeprints with particular shoes on the basis of specific identifying marks (known as “randomly acquired characteristics”) lacked sufficient support in scientific research (President’s Council of Advisors on Science and Technology 2016, 117). Bite mark analysis was dismissed as falling far short of the standards required for foundational validity (President’s Council of Advisors on Science and Technology 2016, 87), with scientific evidence suggesting that examiners could not consistently or accurately agree on the identification or source of a possible bite mark. PCAST quoted the US Department of Justice (1987) guidelines regarding hair analysis, which noted that microscopic hair comparisons were insufficient for personal identification (President’s Council of Advisors on Science and Technology 2016, 13). To date, no single review has provided a definitive answer about the foundational validity of techniques that were not included in the PCAST report (Kaplan et al. 2020).

As stated by PCAST, “the foundational validity of a subjective method can only be established through multiple, appropriately designed black-box studies” (President’s Council of Advisors on Science and Technology 2016, 9). Black-box studies treat examiners as decision-making “black boxes” and measure the accuracy of their conclusions without considering how they arrived at their decisions. In order to evaluate performance, examiners’ decisions in specific cases are compared to the correct answers (of which they are, of course, unaware). Such studies are important for learning about the overall error (misclassification) rate of a forensic discipline. Recent black-box studies evaluating such errors in various forensic fields are summarized in Table 1, which also shows the empirical error rates in these practices. Table 1 reveals that the error rates of the fingerprint and footwear disciplines are relatively close to one another and are lower than those of handwriting and bloodstain pattern analyses. Hopefully, studies of error rates in additional disciplines will provide a more complete picture.

Table 1. Error Rates in Selected Forensic Disciplines a

a False positive = incorrect decision that two samples from different sources originate from the same source; false negative = incorrect decision that two samples from the same source originate from different sources.
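To make the footnote’s definitions concrete, a black-box study’s error rates can be computed by scoring examiners’ decisions against the known ground truth. The sketch below uses invented data purely for illustration; it is not drawn from any of the studies summarized in Table 1:

```python
# Hypothetical scoring of a black-box study. Each trial pairs the known
# ground truth ("same" source or "different" sources) with the examiner's
# decision; the data here are invented for illustration.

def error_rates(trials):
    """Return (false-positive rate, false-negative rate), with each rate
    conditional on the true source relationship, as defined in Table 1."""
    false_pos = sum(1 for truth, decision in trials
                    if truth == "different" and decision == "same")
    false_neg = sum(1 for truth, decision in trials
                    if truth == "same" and decision == "different")
    n_diff = sum(1 for truth, _ in trials if truth == "different")
    n_same = sum(1 for truth, _ in trials if truth == "same")
    return false_pos / n_diff, false_neg / n_same

# 4 different-source pairs (1 wrongly judged "same") and
# 4 same-source pairs (1 wrongly judged "different"):
trials = [("different", "same")] + [("different", "different")] * 3 \
       + [("same", "different")] + [("same", "same")] * 3
print(error_rates(trials))  # (0.25, 0.25)
```

Note that the two rates use different denominators (different-source pairs for false positives, same-source pairs for false negatives), which is why black-box studies report them separately rather than as a single accuracy figure.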

In sum, on the basis of PCAST and recent empirical research, certain forensic disciplines are regarded as highly reliable, such as DNA analysis and fingerprints, while others are highly questionable, such as bite mark and hair analysis. With regard to most other forensic disciplines (including footwear analysis), the degree of accuracy and reliability is still the subject of ongoing study, with promising results in some disciplines. But what does the public think about the reliability of different forensic methods?

How Does the Public Perceive the Accuracy of Forensic Science?

There appears to be considerable agreement across several studies that DNA analysis and fingerprint analysis are perceived by the general public as highly reliable techniques, with bite mark and hair analysis coming in second and shoeprints seen as considerably less reliable (Lieberman et al. 2008; Koehler 2017; Martire et al. 2019; Ribeiro et al. 2019; Kaplan et al. 2020). On the whole, non-forensic evidence is viewed as less trustworthy than most types of forensic analysis (Lieberman et al. 2008; Martire et al. 2019). A comparison of these perceptions with the scientific evaluation carried out as part of the PCAST report suggests that the public is well informed with regard to DNA and fingerprints. At the same time, there are broad misconceptions regarding bite mark analysis, which ranks third among the general public despite the fact that it has been scientifically discredited. Shoeprint analysis is perceived to be the least trustworthy discipline, in spite of PCAST’s conclusion that more research is necessary before determining its status. The findings of five studies (Lieberman et al. 2008; Koehler 2017; Martire et al. 2019; Ribeiro et al. 2019; Kaplan et al. 2020) concerning the forensic disciplines evaluated by PCAST, as well as non-forensic disciplines considered in these studies (aside from alibi, which was not examined in any of the studies), are shown in Table 2.
For the sake of clarity, findings regarding forensic disciplines not included in the PCAST report have been omitted, as has information regarding firearms, a discipline included in the report but not examined in this study.

Table 2. Actual and Perceived Reliability of Different Forensic Techniques

To summarize, we are not the first to grapple with questions concerning public attitudes toward forensic evidence. At the same time, this is still a very small body of work, originating primarily from English-speaking countries, clearly leaving much room for contribution. Moreover, we are unaware of studies that have examined all three relevant types of public attitudes toward forensic science within a single questionnaire (general evaluations, assessments of specific stages in the process of forensic investigation, attitudes toward specific forensic disciplines), making the comparison of responses across the different types extremely difficult. This, in turn, impedes our ability to draw conclusions regarding similarities/differences between generalized and more focused views and, more broadly, to arrive at an integrative answer to the question “how does the public perceive forensic evidence?”

Furthermore, until the publication of the PCAST report (President’s Council of Advisors on Science and Technology 2016), it was not possible to compare public attitudes with objective scientific assessments of various forensic disciplines. In other words, it was not possible to answer the question “how accurate are citizens in their evaluations of specific forensic techniques?” Since 2016, this has only been done by Kaplan et al. (2020), and thus additional investigation is clearly warranted. Accordingly, the present study sets out to examine the three main types of public attitudes toward forensics identified in earlier work: (1) general assessments of the ability of forensic science to solve crime; (2) the degree to which each stage of the forensic process is prone to error, and the extent of human judgement involved; (3) the degree to which various forensic disciplines are error prone, comparing these views to scientific evidence on the reliability of the different disciplines. Finally, it should be noted that the present study relies on a relatively large population sample from a non-English-speaking country, thus expanding the context of this body of work.

RESEARCH DESIGN AND METHODS

Study Context

As noted above, the present study relies on data collected in Israel. It is thus important to provide some background on how forensic science is organized and handled in the Israeli criminal justice system. Most forensic investigations in Israel are carried out by the Israel Police Division of Identification and Forensic Science (DIFS), while certain areas (as suggested by their names) are handled by the National Center of Forensic Medicine (Footnote 1) and the Clinical Toxicology and Pharmacology Laboratory (Footnote 2) (Public Committee for the Prevention of False Convictions and their Correction 2021). DIFS is an internationally recognized body and an active member of the European Network of Forensic Science Institutes.

A second important characteristic of the Israeli context concerns the judicial system, which relies on bench trials before professional judges. There are no juries in Israel; professionally trained judges handle all aspects of the administration of justice. Unlike the judiciary in the United States, Israeli judges not only make decisions on legal issues but operate as factfinders as well. Serious criminal cases are tried by a panel of three judges. In addition to the verdict, Israeli judges provide detailed opinions that can be passed on to superior courts (Straschnov 1999). Although Israel has no jury system, public attitudes toward forensic evidence may nevertheless bear important implications in terms of public pressure on the criminal justice system (Casillas et al. 2011; Roberts 2018).

To contextualize the findings, the highly publicized Zadorov case should be noted (e.g. Hovel 2013; Yanko and Raved 2022; Starr and Silkoff 2023), in which a young girl was murdered in the town of Katzrin in 2006. Roman Zadorov was convicted in 2010 and retried in 2021, after the Supreme Court found that there was sufficient reasonable doubt about his guilt. The 2010 conviction was based partly on prints found at the crime scene that seemed to match his shoes. However, in an unrelated case in December 2013, the Supreme Court ruled that such prints, although admissible as evidence, were problematic and thus of limited value (Hovel 2013). During the retrial (July 2021–March 2023), there was considerable debate about the inconclusive results of a mitochondrial DNA test conducted on hair found at the crime scene, which reignited speculation that Zadorov was innocent (Starr and Silkoff 2023). During the autumn of 2022, shortly before the present study was conducted, this debate attracted considerable media attention (e.g. Yanko and Raved 2022), raising questions about the significance of mitochondrial DNA taken from hair and reviving the earlier debate over the trustworthiness of footwear evidence. This case and its significant publicity may well have drawn public attention to forensic evidence, and specifically to hair analysis and footwear evidence, making the topic particularly salient among the general public.

Sampling and Participants

On 13 September 2022, an online survey measuring the attitudes of the Israeli public toward forensic evidence (see below) was conducted using the services of “Midgam Project Web Panel”, a survey platform frequently used by social scientists in Israel. Through its website, people sign up to participate in surveys in exchange for payment. (A Google Scholar search carried out on 1 December 2022 turned up over 170 studies that used this platform in the past five years; e.g. Weimann-Saks, Peleg-Koriat, and Halperin 2019; Peleg-Koriat and Klar-Chalamish 2020.) Midgam provides samples that represent the adult population in Israel in terms of gender and age, based on data provided by the Israeli Central Bureau of Statistics (CBS). Unfortunately, national or religious minorities (e.g. Arabs, Druze) are not adequately represented in online survey platforms (e.g. Scherpenzeel and Bethlehem 2010), and thus the sample focuses on the adult Jewish population (Footnote 3).

The initial sample included 1,507 participants. Potentially careless or invalid responses were then screened out, using some of the detection methods articulated by Curran (2016). Huang et al. (2012) have suggested a conservative average cut-off time of 2 seconds per question, under which a response would be discarded as careless. The current study took an even more conservative view and doubled this to 4 seconds per question, or 4.8 minutes for the entire survey. A second technique suggested by Curran (2016) examines strings of identical consecutive responses. There is currently no rule of thumb regarding the optimal length of such strings, and various factors may be taken into consideration, such as the number of questions, the size of the scale and the nature of the question (Huang et al. 2012). Given the number of questions (72) and the five-point Likert scale, it was decided to set the cut-off string length at 20, so that any response with 20 or more identical consecutive answers would be excluded. Additionally, it was decided to exclude participants who checked “don’t know/irrelevant” in half or more of the questions. In total, 165 participants were excluded. This left a final sample of 1,342 participants, who are similar overall in their sociodemographic characteristics to the population from which they were drawn (see Table 3) (Footnote 4). A sensitivity analysis was conducted to evaluate the effect of excluding participants based on the chosen criteria by repeating the analysis on the entire sample without any exclusions. This analysis reveals that the exclusions did not affect the findings in a significant way.
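The screening logic described above amounts to three simple rules. It can be sketched as follows (an illustrative Python sketch; the function and variable names are ours and do not reflect the original analysis code):

```python
# Illustrative sketch of the three exclusion rules described in the text.
# Names and structure are hypothetical, not the original analysis code.

N_QUESTIONS = 72
MIN_SECONDS_PER_QUESTION = 4      # Huang et al.'s 2-second cut-off, doubled
MAX_RUN_LENGTH = 19               # 20+ identical consecutive answers -> exclude
MAX_DONT_KNOW_SHARE = 0.5         # "don't know/irrelevant" in half or more -> exclude


def longest_run(answers):
    """Length of the longest run of identical consecutive responses."""
    best = run = 1
    for prev, cur in zip(answers, answers[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best


def keep_respondent(duration_seconds, answers, n_dont_know):
    """Return True if a respondent passes all three screening criteria."""
    if duration_seconds < MIN_SECONDS_PER_QUESTION * N_QUESTIONS:   # < 4.8 minutes total
        return False
    if longest_run(answers) > MAX_RUN_LENGTH:                       # long response string
        return False
    if n_dont_know / N_QUESTIONS >= MAX_DONT_KNOW_SHARE:            # too many "don't know"
        return False
    return True
```

A respondent is retained only if all three checks pass; applying `keep_respondent` to each of the 1,507 raw responses reproduces the kind of filtering the text describes.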

Table 3. Sample Characteristics a

a The population data were obtained from the Israeli Central Bureau of Statistics in 2020.

The Survey Instrument

The questionnaire began with a consent form explaining the purpose of the study, its expected duration and the identity of the researchers. Participants were notified that the survey was anonymous and voluntary, and that they could cease participation at any time and for any reason. Subsequently, they were presented with 72 survey items (Footnote 5). Most items were phrased as statements, which respondents were asked to rank on a five-point Likert scale (1 = “completely disagree” to 5 = “completely agree” in some sections; 1 = “none at all” to 5 = “high degree” in others). The questions used in this study appear in Appendix 1.

The statements enquired about participants’ general views of forensic science (based on the instrument developed by Smith and Bull 2012, 2014). Additionally (based on the questions asked by Ribeiro et al. 2019), participants were asked to estimate the likelihood of error and the degree of human judgement at each stage of the forensic investigation (e.g. To what extent is the collection process error prone? To what extent does the collection process rely on the individual judgement of the person collecting the evidence?), as well as the likelihood of error in the use of various forensic techniques (e.g. To what extent is shoeprint evidence error prone?) and of other types of evidence often presented at trial (e.g. eyewitness testimony, alibis and confessions). Data about participants’ personal characteristics, such as gender, age and education, were obtained from “Midgam”.

Analytical Strategy

In line with the research questions and past research in this area, the present study is a descriptive one. Four statements that capture broad views of forensic evidence are presented at the outset. Then, respondents’ views concerning specific stages of the forensic process are examined (the likelihood of error and degree of human judgement at each stage), followed by their views regarding the accuracy of different types of evidence, both forensic (five types) and non-forensic (three types).

RESULTS

General Views of Forensic Evidence

Respondents’ general views of forensic science are presented in Figure 1. The percentage of participants per response (1 = “completely disagree” to 5 = “completely agree”) is noted, along with the mean level of agreement and standard deviation for each statement. As can be seen from Figure 1, a large share of the respondents agree or completely agree (responses 4 and 5) with the first three statements. Specifically, 51% agree or completely agree that “Every crime can be solved with forensic science” compared to 24% who disagree or completely disagree (responses 1 and 2). Approximately 64% agree/completely agree that “Science is the most reliable way to identify perpetrators” compared to 13.5% who disagree/completely disagree, and 46.7% agree/completely agree that “Forensics always identifies the guilty person” compared to 22.6% who disagree/completely disagree. Respondents exhibited a more balanced reaction to the statement “Forensic evidence always provides a conclusive answer”, with 33.6% agreeing or completely agreeing with the statement, and 35.2% disagreeing or completely disagreeing. Nevertheless, taken together, responses to these statements appear to indicate broad trust in the ability of forensic science to reliably identify the perpetrator of a crime. Moreover, Table 4, which shows a comparison between the current findings and those of Smith and Bull (2012), reveals that the participants of the current study exhibit a considerably higher degree of trust in forensic science (although with larger variability in responses) than those surveyed by Smith and Bull (2012).

Figure 1. General attitudes toward forensic science (n = 1,342). Histograms show the level of agreement with four statements reflecting general attitudes toward forensics. The statement is noted at the top of each histogram, along with the percentage of participants at each level of agreement, ranging from 1 = “completely disagree” to 5 = “completely agree”. The means and standard deviations (SD) are noted at the top left of each histogram.

Table 4. Current Findings v. Smith and Bull (2012)

*** p ≤ 0.001.

Views of Specific Stages in the Forensic Process

Participants were asked to provide their opinions on the degree of error, as well as the extent of human judgement, at various stages in the forensic process of a criminal investigation. Table 5 presents the degree to which respondents believe that the forensic process is prone to error. It reveals that a relatively small share of the respondents (some 15% to 28%) believes that the different stages of the forensic process involve little or no error (responses 1 and 2). A much larger share (about 35% to 46%) believes that the process involves substantial error (responses 4 and 5). The “examination, analysis and interpretation” stage is perceived as most prone to error, and the “presentation” stage as the least. Taken together, these responses reflect less trust in forensic science, or, put differently, more sophisticated, less idealized views than those expressed in response to the generalized questions about forensics reviewed above.

Table 5. Perceived Level of Error at Each Stage of the Forensic Process (n = 1,342) a

a Percentage of participants per level of perceived error (1 = “none at all” to 5 = “high degree”), for each stage of the forensic process.

b Within-subjects analysis of variance was conducted, resulting in a p value ≤ 0.001. The p value was validated using a permutation test, which is robust to violations of the normality and sphericity assumptions. For more details, see Welch (1990).

c Statistically significant post hoc comparisons (using Bonferroni adjustment): Reporting v. all other stages and Presentation v. all other stages.
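The validation procedure mentioned in note b (computing a repeated-measures F statistic and checking its p value against a within-subject permutation distribution) can be sketched as follows. This is an illustrative Python sketch on simulated ratings, not the authors’ actual analysis code:

```python
import numpy as np

def rm_anova_F(data):
    """F statistic of a one-way repeated-measures ANOVA.

    data: (n_subjects, n_conditions) array of ratings.
    """
    n, k = data.shape
    grand = data.mean()
    cond_means = data.mean(axis=0)
    subj_means = data.mean(axis=1)
    ss_cond = n * ((cond_means - grand) ** 2).sum()      # between-condition SS
    ss_subj = k * ((subj_means - grand) ** 2).sum()      # between-subject SS
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj
    return (ss_cond / (k - 1)) / (ss_err / ((n - 1) * (k - 1)))

def permutation_p(data, n_perm=5000, rng=None):
    """Permute condition labels within each subject and recompute F.

    Shuffling within rows preserves each subject's response distribution,
    so the test does not rely on normality or sphericity.
    """
    rng = np.random.default_rng(rng)
    observed = rm_anova_F(data)
    count = 0
    for _ in range(n_perm):
        shuffled = np.apply_along_axis(rng.permutation, 1, data)
        if rm_anova_F(shuffled) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one correction for a valid p value
```

With a strong condition effect the observed F dwarfs the permuted values, driving the p value toward its minimum of 1/(n_perm + 1), consistent with the p ≤ 0.001 reported in the table notes.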

Regarding the role of human judgement, Table 6 reveals that respondents believe that a high level of human judgement is part of each stage of the forensic process. The percentage of participants who chose responses 4 and 5 (which indicate a high degree of human judgement) ranges from about 46% to 54%, with the exception of “storage”, which is perceived by 37.7% to involve a high degree of human judgement. Thus, similar to the responses noted above regarding the perceived degree of error (but in contrast to general views of forensic evidence), these responses suggest recognition of the imperfections of the forensic process, or, in other words, less trust in the process.

Table 6. Perceived Level of Human Judgement at Each Stage of the Forensic Process (n = 1,342) a

a Percentage of participants per perceived level of human judgement (1 = “none at all” to 5 = “high degree”), for each stage of the forensic process.

b Within-subjects analysis of variance was conducted, resulting in a p value ≤ 0.001. The p value was validated using a permutation test, which is robust to violations of the normality and sphericity assumptions. For more details, see Welch (1990).

c All post hoc comparisons (using Bonferroni adjustment) are statistically significant except for Examination, analysis and interpretation v. Reporting, and Examination, analysis and interpretation v. Presentation.

Views of Specific Forensic Techniques

Table 7 presents the perceived degree of error (1 = “none at all”; 5 = “high degree”) embedded in eight types of evidence (five forensic and three non-forensic), in the order of perceived accuracy, from most accurate (lowest mean error score) to the least (highest mean error score). As can be seen from Table 7, DNA is viewed as the most accurate type of evidence, followed by fingerprints, hair comparison, bite marks and shoeprints. The non-forensic types of evidence – alibi, confession and eyewitness testimony – are at the bottom of the table, meaning that all types of forensic evidence were perceived by respondents to be more accurate than other forms of evidence. This ranking reflects considerable trust in forensic techniques.

Table 7. Perceived Level of Error by Type of Evidence (n = 1,342) a

a Percentage of participants per level of perceived error (1 = “none at all” to 5 = “high degree”), for each type of evidence. The different types of evidence are ordered by perceived accuracy (low to high mean error score).

b Within-subjects analysis of variance was conducted, resulting in a p value ≤ 0.001. The p value was validated using a permutation test, which is robust to violations of the normality and sphericity assumptions. For more details, see Welch (1990).

c All post hoc comparisons (using Bonferroni adjustment) are statistically significant except for Hair comparison v. Bite marks and Alibi v. Confession.

As stated above, it is of interest to compare public views to more objective, systematic assessments of the various forensic disciplines. Table 8 compares the mean scores presented in Table 7 with the PCAST assessment (President’s Council of Advisors on Science and Technology 2016). As can be seen from Table 8, the Israeli public’s views correspond with the PCAST assessment with regard to DNA and fingerprint analysis (President’s Council of Advisors on Science and Technology 2016, 75, 101), but there is considerable divergence between them concerning the other disciplines. Bite marks, which were discredited in the PCAST report (President’s Council of Advisors on Science and Technology 2016, 87), were ranked as less error prone than footwear analysis, which “lacks sufficient support” (but is not discredited) according to PCAST (President’s Council of Advisors on Science and Technology 2016, 117). Hair analysis, which was also dismissed by PCAST for the purpose of personal identification (President’s Council of Advisors on Science and Technology 2016, 120), was perceived by respondents as the third most reliable discipline. The size of the gaps between the disciplines is also noteworthy: hair analysis is perceived to be only slightly more error prone than fingerprints, which suggests that the Israeli public does not clearly distinguish between the most and least reliable forensic disciplines. Taken together, these rankings portray a somewhat naive view of forensic evidence; in other words, the public is often incorrect in the legitimacy it ascribes to the police through the prism of forensic science.

Table 8. A Comparison of Scientific Assessment and Perceived Reliability of Forensic Disciplines Identified in this Study

a PCAST = President’s Council of Advisors on Science and Technology (2016).

b Mean perceived error scores are also noted in Table 7; lower score = higher perceived reliability.

c Foundationally valid = the method is repeatable, reproducible and accurate; for a detailed explanation, see President’s Council of Advisors on Science and Technology (2016, 47).

DISCUSSION

Given the intrinsic relationship between forensic sciences and the police, and the significant (and growing) role of science and technology in policing more generally (Williams Reference Williams and Newburn2008), we found it important to examine how the general public perceives forensic evidence, under the assumption that trust in forensic evidence sheds light on police legitimacy more generally. It would be difficult to place little trust in police methods and technologies, but at the same time view the police as a legitimate authority (and vice versa). Nevertheless, we also found it important to examine whether the legitimacy that citizens ascribe to the police through the prism of forensic evidence is justified.

The analysis of the survey data reveals rather complex findings: when asked generalized questions about forensic evidence, respondents display considerable trust in the ability of forensics to reliably identify the perpetrator of a crime. At the same time, in line with the findings of Ribeiro et al. (2019), when asked about specific stages of the forensic process, a large share of the respondents expressed the view that substantial human judgement and considerable error are part of all stages of the forensic process. This might come as a surprise given that forensic science is often perceived to be rooted in technology and hence largely independent of human judgement, a likely outcome of the CSI effect (Ribeiro et al. 2019). These more critical views are, to a certain degree, more “correct”, as they are more in line with the National Research Council and PCAST reports (Committee on Identifying the Needs of the Forensic Sciences Community, National Research Council 2009; President’s Council of Advisors on Science and Technology 2016).

Nevertheless, a relatively high percentage of respondents perceive a low degree of error across all of the forensic techniques. Generally, the public considers forensic evidence (including techniques known to be untrustworthy) to be more reliable than non-forensic evidence, such as eyewitness testimony. It is particularly noteworthy that even bite mark analysis, which was discredited by the President’s Council of Advisors on Science and Technology (2016, 87), is considered to be more reliable than shoeprints and all types of non-forensic evidence. This overestimation of the reliability of bite marks is in line with the findings of Ribeiro et al. (2019) and Kaplan et al. (2020). Overall, it appears that the public is not well informed about the relative reliability of different forensic disciplines. Aside from DNA and fingerprint analysis, individuals assign similar scores to disciplines that are regarded very differently by PCAST, and even rank some of the less reliable ones above others that PCAST considers more reliable.

Whether or not respondents are correct in their evaluations of specific forensic techniques, the findings of the present study overall appear to indicate considerable public trust in forensic science as a field, as well as in the specific forensic disciplines. This reflects positively on police legitimacy, at least through the prism of forensics. At the same time, as shown above, the public is often incorrect in its assessments of forensic evidence, meaning that the popular legitimacy the police gain from their involvement in forensics may not be justified. Additionally, through the lens of populism and potential public pressure (Roberts et al. 2002; Roberts 2018), the findings suggest that the public may be influenced by misinformation and is not fully aware of the field’s limitations. Populist attitudes based on insufficient or inaccurate information could have a negative impact on the weight accorded to various types of forensic evidence in the criminal justice system.

It is interesting to note that a comparison between the current findings and those of Smith and Bull (2012) from the UK reveals that the participants of the current study exhibit a considerably higher degree of trust in forensic science (although with larger variability in responses). This finding is particularly interesting given that trust in the Israel Police is generally lower than trust in the police in the UK, the USA and many other countries (Hadar 2009). Contrary to what might be expected, it may be that Israeli citizens do not associate forensic science with the police, which raises the broader question of how attitudes toward forensic evidence develop and whether such attitudes are actually connected to police legitimacy. Future research should advance this area by measuring attitudes toward police legitimacy and examining how they correlate with attitudes toward forensic science.

Beyond the main finding, our analysis draws attention to the difference between responses to the survey items that enquired about forensic science in general, and responses to more specific questions about the different stages of the forensic process. As noted above, respondents expressed a generally positive orientation toward forensic science, but when asked specific questions that prompted them to delve more deeply into the forensic process and envision the possibility of error at its various stages, they expressed more critical views. This finding is in line with the conclusions of Pickett (2019), who surveyed public views of punitive policies and found that specific questions encouraged respondents to think in a more explicit and critical manner. His explanation echoes the psychological literature on attitude–behaviour relations (for a review, see Kruglanski et al. 2015), and specifically the notion of behaviour focus, which maintains that general attitudes are only weakly related to behaviour, in contrast to more specific attitudes about the behaviour itself (Fishbein and Ajzen 1975; Ajzen 1980, 1985, 2012, 2015). These findings thus underscore the importance of using specific, nuanced questions in research on public attitudes, especially when their behavioural ramifications are of interest. Interestingly, in contrast to the questions about the stages of the forensic process, the questions about the specific forensic disciplines did not generate more critical views. We suspect that the overall trust in forensic science was not challenged by these questions because they enquired about each discipline as a whole and thus did not “force” the respondent into more explicit and critical thinking. In this sense, they behaved as “general” rather than “specific” questions.

In terms of implications, as forensic techniques are used in court despite the fact that the reliability and validity of many are unknown, it is important to continue the effort to study the accuracy and error rates of forensic sciences, as called for in the PCAST report (President’s Council of Advisors on Science and Technology 2016). In addition, it is recommended that the findings of these studies be given wide circulation, in order to provide the public with accurate information and contain the potential negative effects of populism. When forensic evidence is presented in court, it should be accompanied by detailed explanations and data regarding the reliability of the different techniques, including error rates. In cases where these data are not available, such evidence should be presented along with a note of caution. In countries that use a jury system, this information should be presented to the jurors, and in countries that use a bench trial, it is advised to make this information publicly available.

A few limitations of the present study should be noted. First, our sample consists of the adult Jewish population in Israel. We thus encourage future research to replicate this study in other communities, as well as in countries that use different judicial systems. Second, local events may have had an impact on the respondents’ evaluations of various forensic disciplines. As noted earlier, controversy over the Zadorov case centred on shoeprint and hair evidence. The weight of shoeprint evidence was debated repeatedly during the court proceedings (Hovel 2013), potentially undermining respondents’ trust in this specific forensic discipline. This could explain why shoeprints were regarded as the least accurate type of forensic evidence in our study. With regard to hair analysis, which was ranked third for accuracy after DNA and fingerprints, it is possible that the media preoccupation with mitochondrial DNA from hair led to an erroneous public association of hair analysis with DNA. This may have created a “halo effect” (Thorndike 1920; Nisbett and Wilson 1977), which enhanced the perceived accuracy of this practice. Thus, we again encourage replications of our study and analysis in various social contexts.

CONCLUSIONS

The popular legitimacy of the police is anchored, among other things, in the (perceived) reliability of police techniques. Accordingly, this study sets out to illuminate the attitudes of the general public toward the trustworthiness of forensic evidence, and the extent to which these views correspond with objective assessments of the reliability of specific forensic techniques. Our findings reveal that the public places much trust in forensic science in general, as well as in specific forensic disciplines. At the same time, less trust is expressed when questions lead respondents to consider the details of the forensic process at different stages. How “correct” is the public in its assessments? The high regard for DNA and fingerprint analysis corresponds to the findings of objective research, but the public appears to be less aware of the limitations of other forensic techniques. Thus, through the prism of forensics, the public appears to attribute considerable legitimacy to the police, albeit not always justifiably. It is therefore essential to continue research on the reliability of different forensic techniques as called for in the PCAST report (President’s Council of Advisors on Science and Technology 2016), and – no less important – to make these findings publicly available, both in reports to juries and in public statements.

Appendix 1. Sections of the Questionnaire Used in this Study (Translated from Hebrew)

In this section we will ask you a number of general questions about forensic science and forensic evidence.

“Forensic evidence” refers to scientific evidence that is used to solve crimes and is part of the criminal process in court, such as DNA tests and fingerprint analysis.

To what extent do you agree or disagree with each of the following statements, where 1 = “completely disagree” and 5 = “completely agree”.

For the following questions, imagine that a crime has occurred and forensic evidence remains at the crime scene. The police have identified a suspect who will be tried for the crime.

Please think about the entire process involved in processing forensic evidence – from the stage of the first visit to the crime scene, through the collection and analysis of the evidence, to its presentation in court. For each step, please answer both questions (where 1 = “none at all” and 5 = “high degree”).

Step A. The Collection Process (Finding Forensic Evidence at the Crime Scene, Documenting and Collecting It)

Step B. The Storage Process (the Transfer of the Forensic Evidence from the Crime Scene to the Laboratory, and its Storage)

Step C. The Process of Examination, Analysis and Interpretation (Examination and Analysis of the Forensic Evidence in the Laboratory, Interpreting its Meaning, and Documenting the Results of the Analysis)

Step D. The Reporting Process (Writing and Editing the Results of the Forensic Analysis for Lawyers and the Court)

Step E. The Presentation Process (Presenting the Results of the Forensic Analysis in Court)

The next section deals with different types of evidence in criminal law.

In your opinion, to what extent is each type of evidence noted below prone to error, where 1 = “none at all” and 5 = “high degree”.

Naomi Kaplan-Damary is a lecturer at the Institute of Criminology, Faculty of Law, Hebrew University of Jerusalem. Her research interests include the scientific foundations of forensics, perceptions of forensic science, forensic reporting in court, cognitive biases in forensic examination, wrongful convictions and probabilistic evaluation of forensic evidence. Working with both the Israel Police and Public Defender’s Office, Naomi is a primary investigator at the Center for Statistics and Applications in Forensic Evidence under the auspices of the National Institute of Standards and Technology, and a member of the Organization of Scientific Area Committees for Forensic Science in the USA.

Tal Jonathan-Zamir is an Associate Professor at the Institute of Criminology, Faculty of Law, Hebrew University of Jerusalem. Her work, published in leading journals, focuses on policing, particularly police–community relations and evidence-based policing. She has investigated police legitimacy and procedural justice from the perspective of citizens, communities, police officers and neutral observers, in diverse contexts such as routine encounters, security threats, protest events, airport security and at the street level. She has also examined the psychological mechanisms underlying police officers’ orientation to evidence-based policing, effective mechanisms for police training, and the effects of COVID-19 on police–community relations in Israel.

Gali Perry is a lecturer at the Institute of Criminology, Hebrew University of Jerusalem. Her research interests include the policing of political extremism, political violence, terrorism and longitudinal research designs.

Eran Itskovich is a PhD candidate at the Institute of Criminology of the Hebrew University of Jerusalem. His research interests include the sociology of crime, wrongful convictions and political violence. He was awarded the Rothschild Academic Excellence Award for PhD Students in honour of Professor David Weisburd, the Robert Wistrich Prize for Outstanding Advanced Students, and the Olivier Vodoz Prize for the Study of Racism and Antisemitism.

Footnotes

3 For more details on Midgam’s modus operandi, see Midgam (2024).

4 Unfortunately, it was not possible to compare our data with CBS data on education, because the two samples define the minimum age differently (18 and 15 years, respectively), or on income, as the CBS does not provide raw data on this topic.

5 As mentioned, items were based on previous instruments developed by Smith and Bull (2012, 2014) and Ribeiro et al. (2019). The survey items were translated into Hebrew, and the response scales were replaced with a five-point Likert scale, for the following reasons: the 100-point scale used in some of the original studies was unwieldy and, for the purposes of the present study, no more accurate than a five-point scale. Moreover, a standardized scale for all questions was chosen in order to simplify the questionnaire and make it more understandable for the respondents. Finally, not all types of forensic evidence included in Ribeiro et al. (2019) were included in the present study – we focused only on those examined in the PCAST report (President’s Council of Advisors on Science and Technology 2016), in order to enable a comparison of perceptions with objective scientific assessments. Three common non-forensic types of evidence (alibi, confession and eyewitness testimony) were added to our study for purposes of comparison.

References

Ajzen, Icek. 1980. Understanding Attitudes and Predicting Social Behavior. Englewood Cliffs, NJ: Prentice Hall.
Ajzen, Icek. 1985. “From Intentions to Actions: A Theory of Planned Behavior.” Pp. 11–39 in Action Control: From Cognition to Behavior, edited by Kuhl, J. and Beckmann, J. Heidelberg: Springer-Verlag.
Ajzen, Icek. 2012. “Attitudes and Persuasion.” Pp. 367–93 in The Oxford Handbook of Personality and Social Psychology, edited by Deaux, K. and Snyder, M. New York: Oxford University Press.
Ajzen, Icek. 2015. “The Theory of Planned Behaviour is Alive and Well, and Not Ready to Retire: A Commentary on Sniehotta, Presseau, and Araújo-Soares.” Health Psychology Review 9(2):131–7.
Black Women’s Blueprint, Inc. 2019. “Survivor Stories: Joan Little.” Retrieved 29 February 2024 (https://web.archive.org/web/20230402094523/https://www.mamablack.org/single-post/2019/02/17/survivor-stories-joan-little).
Casillas, Christopher J., Enns, Peter K., and Wohlfarth, Patrick C. 2011. “How Public Opinion Constrains the US Supreme Court.” American Journal of Political Science 55(1):74–88.
Chen, Serena and Chaiken, Shelly. 1999. “The Heuristic–Systematic Model in its Broader Context.” Pp. 73–96 in Dual Process Theories in Social Psychology, edited by Chaiken, S. and Trope, Y. New York: Guilford Press.
Cole, Simon A. and Dioso-Villa, Rachel. 2006. “CSI and its Effects: Media, Juries, and the Burden of Proof.” New England Law Review 41:435–71.
Committee on Identifying the Needs of the Forensic Sciences Community, National Research Council. 2009. Strengthening Forensic Science in the United States: A Path Forward. Washington, DC: National Academies Press.
Curran, Paul G. 2016. “Methods for the Detection of Carelessly Invalid Responses in Survey Data.” Journal of Experimental Social Psychology 66:4–19.
Dai, Mengyan, Frank, James, and Sun, Ivan. 2011. “Procedural Justice during Police–Citizen Encounters: The Effects of Process-Based Policing on Citizen Compliance and Demeanor.” Journal of Criminal Justice 39(2):159–68.
Desmond, Matthew, Papachristos, Andrew V., and Kirk, David S. 2016. “Police Violence and Citizen Crime Reporting in the Black Community.” American Sociological Review 81(5):857–76.
Dickson, Eric S., Gordon, Sanford C., and Huber, Gregory A. 2022. “Identifying Legitimacy: Experimental Evidence on Compliance with Authority.” Science Advances 8(7):eabj7377.
Dolev, Daniel. 2018. “Shai Nitzan against the Background of the Rada Case: We Will Not Change our Position Due to Public Pressure.” Walla News, 6 November 2018, retrieved 2 April 2023 (https://news.walla.co.il/item/3198596).
Eckert, William G. 1996. Introduction to Forensic Sciences. Boca Raton, FL: CRC Press.
Eldridge, Heidi. 2019. “Juror Comprehension of Forensic Expert Testimony: A Literature Review and Gap Analysis.” Forensic Science International: Synergy 1:24–34.
Fishbein, Martin and Ajzen, Icek. 1975. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research. Reading, MA: Addison Wesley Publishing Co.
Greene, Christina. 2015. “‘She Ain’t No Rosa Parks’: The Joan Little Rape–Murder Case and Jim Crow Justice in the Post-Civil Rights South.” Journal of African American History 100(3):428–47.
Hadar, Yael. 2009. “The Israeli Public’s Trust in Government Institutions during the Last Decade.” Parliament 63, 24 December 2009, retrieved 29 June 2023 (https://www.idi.org.il/parliaments/3467/8205) (in Hebrew).
Hicklin, R. Austin, Eisenhart, Linda, Richetelli, Nicole, Miller, Meredith D., Belcastro, Peter, Burkes, Ted M., Parks, Connie L., Smith, Michael A., Buscaglia, JoAnn, Peters, Eugene M., Schwartz Perlman, Rebecca, and Eckenrode, Brian A. 2022a. “Accuracy and Reliability of Forensic Handwriting Comparisons.” Proceedings of the National Academy of Sciences 119(32):e2119944119.
Hicklin, R. Austin, McVicker, Brian C., Parks, Connie L., LeMay, Jan, Richetelli, Nicole, Smith, Michael A., Buscaglia, JoAnn, Schwartz Perlman, Rebecca, Peters, Eugene M., and Eckenrode, Brian A. 2022b. “Accuracy, Reproducibility, and Repeatability of Forensic Footwear Examiner Decisions.” Forensic Science International 339:111418.
Hicklin, R. Austin, Winer, Kevin R., Kish, Paul E., Parks, Connie L., Chapman, William, Dunagan, Kensley, Richetelli, Nicole, Epstein, Eric G., Ausdemore, Madeline A., and Busey, Thomas A. 2021. “Accuracy and Reproducibility of Conclusions by Forensic Bloodstain Pattern Analysts.” Forensic Science International 325:110856.
Houck, Max M. and Siegel, Jay A. 2009. Fundamentals of Forensic Science. Burlington, MA: Academic Press.
Hovel, Revital. 2013. “Israel’s Supreme Court Rules Footprints Are Problematic Evidence.” Haaretz, 6 December 2013, retrieved 4 June 2023 (https://www.haaretz.com/2013-12-06/ty-article/.premium/footprints-ruled-problematic-evidence/0000017f-e162-d568-ad7f-f36b43210000).
Huang, Jason L., Curran, Paul G., Keeney, Jessica, Poposki, Elizabeth M., and DeShon, Richard P. 2012. “Detecting and Deterring Insufficient Effort Responding to Surveys.” Journal of Business and Psychology 27:99–114.
Innocence Project. 2023. “DNA Exonerations in the United States (1989–2020).” Benjamin N. Cardozo School of Law, Yeshiva University, retrieved 27 June 2023 (https://innocenceproject.org/dna-exonerations-in-the-united-states/).
Kaplan, Jacob, Ling, Shichun, and Cuellar, Maria. 2020. “Public Beliefs about the Accuracy and Importance of Forensic Evidence in the United States.” Science and Justice 60(3):263–72.
Kim, Young S., Barak, Gregg, and Shelton, Donald E. 2009. “Examining the ‘CSI-Effect’ in the Cases of Circumstantial Evidence and Eyewitness Testimony: Multivariate and Path Analyses.” Journal of Criminal Justice 37(5):452–60.
Klentz, Bonnel A., Winters, Georgia M., and Chapman, Jason E. 2020. “The CSI Effect and the Impact of DNA Evidence on Mock Jurors and Jury Deliberations.” Psychology, Crime & Law 26(6):552–70.
Koehler, Jonathan J. 2017. “Intuitive Error Rate Estimates for the Forensic Sciences.” Jurimetrics 57:153–68.
Kruglanski, Arie W., Jasko, Katarzyna, Chernikova, Marina, Milyavsky, Maxim, Babush, Maxim, Baldner, Conrad, and Pierro, Antonio. 2015. “The Rocky Road from Attitudes to Behaviors: Charting the Goal Systemic Course of Actions.” Psychological Review 122(4):598–620.
Lieberman, Joel D., Carrell, Courtney A., Miethe, Terance D., and Krauss, Daniel A. 2008. “Gold versus Platinum: Do Jurors Recognize the Superiority and Limitations of DNA Evidence Compared to Other Types of Forensic Evidence?” Psychology, Public Policy, and Law 14(1):27–62.
Lodge, Chloe and Zloteanu, Mircea. 2020. “Jurors’ Expectations and Decision-Making: Revisiting the CSI Effect.” The North of England Bulletin 2:19–30.
Martire, Kristy A., Ballantyne, Kaye N., Bali, Agnes, Edmond, Gary, Kemp, Richard I., and Found, Bryan. 2019. “Forensic Science Evidence: Naive Estimates of False Positive Error Rates and Reliability.” Forensic Science International 302:109877.
Mastrofski, Stephen D. 1996. “Measuring Police Performance in Public Encounters.” Pp. 207–41 in Quantifying Quality in Policing, edited by Hoover, L. T. Washington, DC: Police Executive Research Forum.
Mazerolle, Lorraine, Bennett, Sarah, Antrobus, Emma, and Eggins, Elizabeth. 2012. “Procedural Justice, Routine Encounters and Citizen Perceptions of Police: Main Findings from the Queensland Community Engagement Trial (QCET).” Journal of Experimental Criminology 8:343–67.
Mazerolle, Lorraine, Bennett, Sarah, Davis, Jacqueline, Sargeant, Elise, and Manning, Matthew. 2013. “Procedural Justice and Police Legitimacy: A Systematic Review of the Research Evidence.” Journal of Experimental Criminology 9:245–74.
Midgam. 2024. “Common Questions.” Retrieved 15 February 2024 (https://www.midgampanel.com/clients/faq.asp).
Mnookin, Jennifer L., Cole, Simon A., Dror, Itiel E., Fisher, Barry A. J., Houck, Max M., Inman, Keith, Kaye, David H., Koehler, Jonathan J., Langenburg, Glenn, Risinger, D. Michael, Rudin, Norah, Siegel, Jay, and Stoney, David A. 2011. “The Need for a Research Culture in the Forensic Sciences.” UCLA Law Review 58(3):725–79.
Nagin, Daniel S. and Telep, Cody W. 2017. “Procedural Justice and Legal Compliance.” Annual Review of Law and Social Science 13:5–28.
Nagin, Daniel S. and Telep, Cody W. 2020. “Procedural Justice and Legal Compliance: A Revisionist Perspective.” Criminology & Public Policy 19(3):761–86.
Nisbett, Richard E. and Wilson, Timothy D. 1977. “The Halo Effect: Evidence for Unconscious Alteration of Judgments.” Journal of Personality and Social Psychology 35(4):250–6.
Peleg-Koriat, Inbal and Klar-Chalamish, Carmit. 2020. “The #MeToo Movement and Restorative Justice: Exploring the Views of the Public.” Contemporary Justice Review 23(3):239–60.
Pennington, Nancy and Hastie, Reid. 1981. “Juror Decision-Making Models: The Generalization Gap.” Psychological Bulletin 89:246–87.
Pennington, Nancy and Hastie, Reid. 1986. “Evidence Evaluation in Complex Decision-Making.” Journal of Personality and Social Psychology 51:242–58.
Pennington, Nancy and Hastie, Reid. 1988. “Explanation-Based Decision-Making: Effects of Memory Structure on Judgment.” Journal of Experimental Psychology: Learning, Memory and Cognition 14:521–33.
Pennington, Nancy and Hastie, Reid. 1993. “The Story Model for Juror Decision-Making.” Pp. 192–221 in Inside the Juror: The Psychology of Juror Decision-Making, edited by Hastie, R. New York: Cambridge University Press.
Pickett, Justin T. 2019. “Public Opinion and Criminal Justice Policy: Theory and Research.” Annual Review of Criminology 2:405–28.
Podlas, Kimberlianne. 2006. “‘The CSI Effect’: Exposing the Media Myth.” Fordham Intellectual Property, Media and Entertainment Law Journal 16(2):429–65.
President’s Council of Advisors on Science and Technology. 2016. Report to the President – Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods. Washington, DC: Executive Office of the President of the United States. September 2016, retrieved 19 February 2024 (https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/PCAST/pcast_forensic_science_report_final.pdf).
Public Committee for the Prevention of False Convictions and their Correction. 2021. “Interim Report on the Subject of Forensic Evidence.” Retrieved 9 April 2023 (https://www.gov.il/BlobFolder/news/dantziger2/he/dantiger.pdf) (in Hebrew).
Ribeiro, Gianni, Tangen, Jason M., and McKimmie, Blake M. 2019. “Beliefs about Error Rates and Human Judgment in Forensic Science.” Forensic Science International 297:138–47.
Roberts, Julian. 2018. Public Opinion, Crime, and Criminal Justice. New York: Routledge.
Roberts, Julian V., Stalans, Loretta J., Indermaur, David, and Hough, Mike. 2002. Penal Populism and Public Opinion: Lessons from Five Countries. New York: Oxford University Press.
Saferstein, Richard. 2017. Criminalistics: An Introduction to Forensic Science. New York: Pearson.
Saks, Michael J. and Koehler, Jonathan J. 2005. “The Coming Paradigm Shift in Forensic Identification Science.” Science 309(5736):892–5.
Scherpenzeel, Annette C. and Bethlehem, Jelke G. 2010. “How Representative Are Online Panels?” Pp. 105–32 in Social and Behavioral Research and the Internet: Advances in Applied Methods and Research Strategies, edited by Das, M., Ester, P., and Kaczmirek, L. New York: Routledge.
Schweitzer, Nicholas J. and Saks, Michael J. 2007. “The CSI Effect: Popular Fiction about Forensic Science Affects the Public’s Expectations about Real Forensic Science.” Jurimetrics 47(3):357–64.
Shelton, Donald E., Kim, Young S., and Barak, Gregg. 2006. “A Study of Juror Expectations and Demands Concerning Scientific Evidence: Does the CSI Effect Exist?” Vanderbilt Journal of Entertainment and Technology Law 9(2):331–68.
Smith, Lisa L. and Bull, Ray. 2012. “Identifying and Measuring Juror Pre-Trial Bias for Forensic Evidence: Development and Validation of the Forensic Evidence Evaluation Bias Scale.” Psychology, Crime and Law 18(9):797–815.
Smith, Lisa L. and Bull, Ray. 2014. “Exploring the Disclosure of Forensic Evidence in Police Interviews with Suspects.” Journal of Police and Criminal Psychology 29(2):81–6.
Starr, Michael and Silkoff, Shira. 2023. “16 Years On: Roman Zadorov Acquitted of Tair Rada’s Murder.” The Jerusalem Post, 30 March 2023, retrieved 2 April 2023 (https://www.jpost.com/israel-news/article-735889).
Straschnov, Amnon. 1999. “The Judicial System in Israel.” Tulsa Law Journal 34(3):527–35.
Sunshine, Jason and Tyler, Tom R. 2003. “The Role of Procedural Justice and Legitimacy in Shaping Public Support for Policing.” Law & Society Review 37(3):513–48.
Thorndike, Edward L. 1920. “A Constant Error in Psychological Ratings.” Journal of Applied Psychology 4(1):25–9.
Tyler, Tom R. and Fagan, Jeffrey. 2008. “Legitimacy and Cooperation: Why Do People Help the Police Fight Crime in their Communities?” Ohio State Journal of Criminal Law 6:231–75.
Tyler, Tom R. and Huo, Yuen J. 2002. Trust in the Law: Encouraging Public Cooperation with the Police and Courts. New York: Russell Sage Foundation.
Tyler, Tom R. and Jackson, Jonathan. 2014. “Popular Legitimacy and the Exercise of Legal Authority: Motivating Compliance, Cooperation, and Engagement.” Psychology, Public Policy, and Law 20(1):78–95.
Tyler, Tom R. and Nobo, Caroline. 2023. Legitimacy-Based Policing and the Promotion of Community Vitality. Cambridge: Cambridge University Press.
Ulery, Bradford T., Hicklin, R. Austin, Buscaglia, JoAnn, and Roberts, Maria Antonia. 2011. “Accuracy and Reliability of Forensic Latent Fingerprint Decisions.” Proceedings of the National Academy of Sciences 108(19):7733–8.
US Department of Justice. 1987. Federal Sentencing Guidelines: Answers to Some Questions. NIJ Report no. 205. Washington, DC: US Government Printing Office.
Walters, Glenn D. and Bolger, P. Colin. 2019. “Procedural Justice Perceptions, Legitimacy Beliefs, and Compliance with the Law: A Meta-Analysis.” Journal of Experimental Criminology 15:341–72.
Weimann-Saks, Dana, Peleg-Koriat, Inbal, and Halperin, Eran. 2019. “The Effect of Malleability Beliefs and Emotions on Legal Decision Making.” Justice System Journal 40(1):21–38.
Weisburd, David and Majmundar, Malay K. (editors). 2018. Proactive Policing: Effects on Crime and Communities. Washington, DC: National Academies Press.
Welch, William J. 1990. “Construction of Permutation Tests.” Journal of the American Statistical Association 85(411):693–8.
Williams, Robin. 2008. “Policing and Forensic Science.” Pp. 760–93 in Handbook of Policing, edited by Newburn, T. Cullompton: Willan.
Winter, Ryan J. and Greene, Edith. 2007. “Juror Decision-Making.” Pp. 739–62 in Handbook of Applied Cognition, 2nd ed., edited by Durso, F. T., Nickerson, R. S., Dumais, S. T., Lewandowsky, S., and Perfect, T. J. Hoboken, NJ: John Wiley & Sons, Inc.
Wolfe, Scott E., Nix, Justin, Kaminski, Robert, and Rojek, Jeff. 2016. “Is the Effect of Procedural Justice on Police Legitimacy Invariant? Testing the Generality of Procedural Justice and Competing Antecedents of Legitimacy.” Journal of Quantitative Criminology 32(2):253–82.
Yanko, Adir and Raved, Ahiya. 2022. “Hair Found at Girl’s 2006 Murder Scene Matches Ex of Former Suspect.” Ynet News, 18 July 2022, retrieved 4 June 2023 (https://www.ynetnews.com/article/hknw9rf3q).
Table 1. Error Rates in Selected Forensic Disciplines

Table 2. Actual and Perceived Reliability of Different Forensic Techniques

Table 3. Sample Characteristics

Figure 1. General attitudes toward forensic science (n = 1,342). Histograms show the level of agreement with four statements reflecting general attitudes toward forensics. The statement is noted on the top of each histogram, along with the percentage of participants at each level of agreement, ranging from 1 = “none at all” to 5 = “high degree”. The means and standard deviations (SD) are noted at the top left of each histogram.

Table 4. Current Findings v. Smith and Bull (2012)

Table 5. Perceived Level of Error at Each Stage of the Forensic Process (n = 1,342)

Table 6. Perceived Level of Human Judgement at Each Stage of the Forensic Process (n = 1,342)

Table 7. Perceived Level of Error by Type of Evidence (n = 1,342)

Table 8. A Comparison of Scientific Assessment and Perceived Reliability of Forensic Disciplines Identified in this Study