Transcendent Philosophy: An International Journal For Comparative Philosophy And Mysticism

Exploring Expert Opinions on the Content of Expert Witness Psychological Reports

Published by London Academy of Iranian Studies: December 2024

Volume 25, Number 36

Author(s):

Alireza Moafi
Expert Therapy, UK

Keywords: Practitioner Psychologist, Expert Report, Expert Witness Psychological Report

Abstract

The research examined the current state of expert witness psychological reports. Ten HCPC-registered, BPS-chartered practitioner psychologists shared their experiences and perspectives on qualifications, job titles, report preparation, content, and feedback from solicitors or court officials. Data were gathered through a 27-question survey and analysed using a thematic approach following Braun and Clarke’s (2019) six-step guide. This study explored three main areas: qualifications, report quality, and feedback from solicitors or courts. A critical review of the literature revealed significant variation in training, decision-making, knowledge of the field, and feedback from courts. Findings suggest a greater need for standardisation of expert witness psychological reports.

This is an Open Access article distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.


Copyright © London Academy of Iranian Studies, 2025.

Introduction

This introductory section establishes the study’s purpose and theoretical framework, including a summary of the methodology and a concluding overview. A detailed literature review follows in the next section, critically evaluating the relevant studies and findings. This qualitative research employs a descriptive design to explore practitioner psychologists’ experiences of expert report writing for courts, gaining insights into qualifications, report content, and court feedback (Kim et al., 2017).

Various expert witness types exist, but practitioner psychologist (PP) expert witnesses are pivotal, serving the courts with professional and scientific guidance (Shapiro et al., 2015). While Crown Court experts address both adult and child cases, PPs are recognised as essential for providing psychological insights into the psyche of witnesses and relevant cases (Ireland, 2012). Different titles—clinical, counselling, educational, forensic, health, occupational, and sports and exercise psychologists—are regulated under the Health and Care Professions Council (HCPC) in the UK. Since 2009, these protected titles have been held by PPs, who may, with or without expert witness training, offer Expert Psychological Report (EPR) writing services. Titles like child or consultant psychologist also exist, creating confusion for those seeking expert witnesses for court reports (Ireland, 2012).

PPs operate at the intersection of psychology and law, using their expertise to assist courts (Rangaswamy, 2017). However, they must meet specific credential, training, and background requirements relevant to each case (Wolffram, 2018). Psychological assessments, including interviews, testing, and clinical judgement, are essential for courts when making decisions (Wygant & Lareau, 2015). The reliability and validity of EPRs, as well as their professional accuracy, have been questioned. In Australia, magistrates distinguish between high- and low-quality reports but often find necessary information missing (Martire & Montgomery-Farrer, 2020). Concerns over reliability, validity, and professional demands have persisted in EPRs (Faust & Ziskin, 1988).

Shanteau’s (1992) research on expert competence introduces the idea that “competence depends on five components: knowledge of the domain, psychological traits, cognitive skills, decision strategies, and task characteristics.” The courts’ reliance on PPs for EPR is justified by their ability to bridge psychological and legal knowledge, aiding the courts in decision-making processes (Zwartz, 2018).

In family court proceedings, PPs often assess factors like personality disorders, substance abuse, and family violence (O’Neill et al., 2018). However, criticisms include poor reliability, varying expertise levels, and bias (Zumbach & Koglin, 2015). There is limited research on how PPs integrate court feedback to improve EPR quality. Recommendations for standardisation in England and Wales include adopting elements like the US Daubert Criterion and the UK’s Civil Procedure Rules Practice Direction (Ireland, 2012). Enhanced cooperation between law and psychology can help courts evaluate expert witness credibility, reducing common-sense errors and potential misjudgements in legal settings (Curci et al., 2020).

Research Aim

The current qualitative descriptive research aims to examine how practitioner psychologists maintain competence and assure the quality of expert psychological reports, as well as how they integrate court feedback to maintain quality when providing expert opinion in court. Using an opportunity sampling technique, the study recruited a sample of 10 PPs who have provided EPR opinions for the courts. All psychologists are based in the United Kingdom. The research questions emerged as a follow-up to Ireland’s (2012) findings and recommendations. The questions are as follows:

RQ1: How do practitioner psychologists maintain their competence and assure the quality of their expert psychological reports when providing expert opinion in the court?

RQ2: How do practitioner psychologists integrate the court’s feedback to maintain their psychological reports’ quality?

Literature Review

The literature review highlights reliability and validity issues with tools used by authors of Expert Psychological Reports (EPR), as well as disparities between psychological and legal disciplines. Ireland’s (2012) study is central to the current research, representing one of the most comprehensive and challenging investigations into EPR quality in England and Wales. Recommendations from Ireland’s study guide courts on evaluating EPR quality, specifically examining the application of Civil Procedure Rules Practice Direction (UK) and proposing criteria for admissibility—such as the Daubert standards, originally developed in the US legal system.

This study largely follows up on Ireland (2012), questioning expert witness competency amid findings by Carter (2012) that 20% of expert witnesses lack sufficient qualifications. The relevance of Ireland’s study is reinforced by cases like O’Leary v Mercy University Hospital Cork Ltd [2019] (Slingo, 2019; Bahran & Townsend, 2019) and a BPS (2021) survey. Ireland’s findings have been critiqued by several authors, including Reed (2012, 2016) and Mason (2012). Additionally, the study’s semi-structured interview data were analysed using Braun et al.’s (2014) six-step thematic analysis.

This study will evaluate psychological report research and examine gaps in psychological theory application within EPR literature. Although ample evidence supports psychometric test validity, discrepancies in EPR standardisation for court admissibility remain (Ireland, 2012). The literature review will first discuss the role of practitioner psychologists (PP), and the quality and content of EPRs regarding court admissibility, including feedback provided to PPs.

The search strategy used for this study included database searches on PubMed, Web of Science, EBSCO Host, and JSTOR, alongside Google Scholar, ResearchGate, and the TEES Library. Key terms included “practitioner psychologists,” “practitioner psychologists AND quality and expert opinion,” and other related phrases. Relevant data were primarily peer-reviewed, providing a framework for the key topics in the literature and highlighting gaps in expert psychological witnesses’ descriptions of their professional experiences.

Practitioner Psychologists: Definition, Qualification and Training

The first research question of this study is: How qualified are psychologists offering Expert Psychological Report (EPR) services? This section critically reviews literature that defines the boundaries and expectations for qualifications among practitioner psychologists (PP), emphasising the essential role of these experts within the UK legal system. Seven legally protected PP titles, regulated by the Health and Care Professions Council (HCPC) since 2009 (Farndon, 2016), encompass those eligible to offer EPR services, though they may do so with or without expert witness training. Confusion often arises when non-PP professionals using related titles provide EPRs, complicating matters for those seeking expert psychologists for court reports (Ireland, 2012).

The British Psychological Society (BPS) first produced guidelines in Psychologists as Expert Witnesses: Guidelines and Procedures for England, Wales, and Northern Ireland (BPS, 2016). Following HCPC’s designation of protected titles, PP roles evolved, with institutions like Cardiff University offering optional expert witness certification for PPs interested in EPR services. Thus, understanding PP qualifications, associated training, and their implications for court proceedings is crucial in identifying the profession’s challenges.

PPs offering EPR services should demonstrate specialised psychology knowledge aligned with Civil Procedure Rules (CPR) for legal and clinical cases. This entails evidence-based practices, reliable assessment tools, and competency-based evaluations bridging legal expectations (as per CPR guidelines) with practical psychological knowledge (Melton et al., 2018; Weissman, 2012). However, distinctions in principles and operational constructs between psychology and law can affect the clarity of these roles (Melton et al., 2018).

There remains a debate on qualifications: Ireland (2012) advocates for HCPC and chartered BPS membership, while Mason (2012) suggests either should suffice. Required training should cover the application of legal and psychological principles to human behaviour, legal nuances, and familiarity with CPR roles for expert witnesses. This includes up-to-date professional training (Bartol & Bartol, 2017; Melton et al., 2018), enabling PPs to deliver reliable, scientifically-backed insights that meet judicial standards (Shapiro et al., 2015). However, they must also satisfy the Turner Rule for UK admissibility, ensuring their contributions align with court jurisdictional criteria (Melton et al., 2018).

PPs apply psychological assessments to support judicial processes in civil, criminal, and family courts, covering competencies like client mental state, custody evaluations, criminal responsibility, and personal injury claims (Jackson & Roesch, 2015; Wygant & Lareau, 2015; Young & Brodsky, 2016). Research supports a strong alignment between PP expert opinion and court outcomes (Zwartz, 2018), demonstrating their integral role in legal decision-making.

Finally, PPs must ensure that their EPRs meet standards of reliability, validity, and admissibility, avoiding the ten criticisms highlighted by Ireland (2012) to maintain integrity in their contributions to the legal system.

EPR Quality

The need to follow up Ireland’s research was also highlighted by Neal et al. (2020), who examined psychological assessment via a two-part study. The first part assessed 364 psychological assessment tools identified in psychologists’ reports used in court cases; the second part analysed legal challenges to those psychological assessments. Results from the first part reveal that almost all of the assessment tools utilised were empirically tested (90%).

However, only 67% were identified as generally accepted within the field. Neal et al. (2020) demonstrate that legal challenges to the admission of EPRs are uncommon in the legal setting; challenges to the most scientifically suspect tools are almost non-existent. The courts often do not challenge evidence provided by psychological experts, and when they do, judges predominantly do not carry out the scrutiny required by law. However, it is noted that most judges do not hold the specialised psychological knowledge that would guide feedback or rulings on the inadmissibility of evidence (Neal et al., 2020), which may serve as a central barrier to the validation of these tools and reports.

Grisso (2010) analysed feedback on 62 EPRs and identified thirty types of issues in the reports reviewed. The most common issues were the expression of opinions without sufficient evidence or explanation, supported by little or no data or logic. This presents significant concern regarding the admissibility of reports that lack evidence or show bias. The above research further indicates the need for Ireland’s (2012) report and for investigating the quality of EPRs, including their objectivity and sufficiency of evidence.

Reports of Experts Lacking Credentials in Trials

PP credentials and EPR contents can have a substantial impact on judicial decisions, so poor-quality reports can adversely affect legal cases. For instance, Slingo (2019) noted that eight defendants’ charges were dismissed after the expert witness was discredited for lacking essential credentials. Judge Nicholas Loraine-Smith criticised the expert, stating they were “not of suitable calibre,” had “no understanding of expert duties,” and lacked formal qualifications and peer-reviewed work. This case highlights the risks of hiring unqualified experts, with judges able to assess evidence more objectively without such flawed testimony (Convery, 2019).

In O’Leary v Mercy University Hospital Cork Ltd [2019] IESC 48, Mr Justice MacMenamin clarified that experts must stay within their expertise. In Van Oord UK Ltd v. Allseas UK Ltd, an expert’s testimony was deemed “entirely worthless” as it lacked independent verification and rigour, making a “mockery of the oath” and reducing the expert to a mere “mouthpiece” for claimants (Convery, 2019).

Bahran and Townsend (2019) observed ongoing issues with expert evidence quality, citing a high-profile fraud trial at Southwark Crown Court in May 2019 that collapsed due to an unqualified expert witness. The British Psychological Society (2018) reported that only around two-thirds of members serving as expert witnesses had specific training, despite the demanding nature of courtroom settings. Similarly, Ireland (2012) found that just one-fifth of report writers had appropriate training.

Ireland’s (2012) report faced criticism, particularly from Reed (2016), who questioned its transparency and methodology, noting Ireland’s experience in criminal and civil, rather than family law, settings. Reed (2016) also questioned why courts were not enforcing existing guidelines on expert evidence. Additionally, Mason (2012) criticised Ireland’s study for methodological and ethical flaws, pointing out its partial funding by the Family Justice Council (FJC), whose findings were then reported as ‘facts’ by the media, and asserting that it supported an “anti-expert-witness, efficiency savings agenda” which led to sensationalised media coverage.

Tucker (2014) proposed a model designed to enhance report validity by providing multidisciplinary teams with access to balanced and quality-assured expert witness reports, resulting in fewer complaints. They argued that this model offers an educational structure, enabling trainee expert witnesses to receive essential guidance and supervision. Cardiff Graduate School’s evaluation demonstrated the model’s ability to address the five main concerns identified in Ireland’s (2012) report.

To mitigate confirmation bias and maintain witness objectivity, Otgaar et al. (2017) stressed the importance of employing alternative scenarios, with peer review to validate findings. Ireland (2012) also identified peer review as crucial and incorporated it into interview questions in her study. Research has often focused on the reliability and validity of expert witness reports, yet there is a need to define and evaluate “high-quality psychological evaluations” further. Wettstein (2005) highlighted the importance of developing structured assessment procedures and establishing incentives for producing accurate reports, with professional organisations playing a key role.

Judges determine the admissibility of expert testimony based on the general acceptance of the expert’s methods within the scientific community (Hosch et al., 2009). However, Cutler and Kovera (2011) observed that a lack of standard safeguards often leaves judges and jurors without sufficient insight to assess evidence thoroughly. Kovera et al. (2002) reviewed the United States v. Libby (2006) case, examining how expert characteristics impact the admissibility of psychological evidence. Bronstein (2016) recommended thorough preparation for cross-examination as part of a comprehensive witness evaluation process.

Efforts to evaluate expert witness reports scientifically have included meta-analysis, which allows for the pooling of variables across studies (McAuliff & Kovera, 2012). Although ecological validity appears low in some studies (e.g., Anderson & Bushman, 1997; Bornstein, 1999), its influence on experimental results is limited. The quality of psychological evidence in court remains uncertain, a concern given that forensic report accuracy is essential for sound judicial outcomes (Zwartz, 2018).

Despite guidance on report structure from several experts (e.g., Borum & Grisso, 1996; Heilbrun, 2001; Gutheil, 1998), flexibility in EPR structure can lead to variations influenced by individual writing styles.

Overall, there is a noted need to examine the quality of reports and the standardisation of the techniques that are admissible in court. Newman (1994) reported a need for further research on the quality and standard of reports used when expressing opinions in legal cases involving children. Examining the standard of psychological reports, Faust and Ziskin (1988) found that professionals often do not reach reliable or valid conclusions in their reports, undermining the accuracy of their judgements, which does not supersede that of laypersons, thus calling into question whether their expertise meets legal standards. Bach and Gudjonsson (1998) claimed that there is evidence to suggest that psychologists could also be producing inadequate psychological evidence, which is then utilised in court.

However, standardisation is lacking regarding whether a report should be admitted. This varies based on the admissibility criteria in the UK, the Turner rule, and the relevance of the report’s content and quality. There are limitations and shortcomings in the underlying research and associated issues with admissibility (Cutler, 2009). Cutler and Kovera (2011) reviewed the literature on psychological testimony, regulations relating to admissibility, and the principles identifying whether items of evidence may be received by the court (Law and Martin, 2009). Findings demonstrated that because standard court safeguards do not increase sensitivity to research flaws, and because the judiciary lacks expertise in evaluating the quality of various types of expert evidence, further research is required to assist professionals in evaluating expert testimony. Future research is needed to distinguish reliable from unreliable reports and to address methodological flaws where the evidence is weak.

Research on legal professionals in England and Wales demonstrates a significant need to address the admissibility of evidence and expert witness bias. Considering controversial research evidence on admissibility, they described the same obstacles regarding social science evidence as their international colleagues (Robertson and Broadhurst, 2019). Gatowski et al.’s (2001) survey showed that most judges support the “gatekeeping” role described by Daubert. Judges also attributed more weight to general acceptance as an admissibility criterion. Whilst the judges differentiated between “scientific” and “technical or otherwise specialised” knowledge, even the application of Daubert had little effect on which types of expert evidence were classified as “science” or “nonscience.”

The Law Commission for England and Wales published a consultation paper (No. 190) recommending that a criterion similar to the Daubert Standard be adopted to assist and upgrade the rules of evidence related to the admissibility of psychological evidence, proposing that “judges should only admit expert-opinion evidence after determining that it is sufficiently reliable.” The Law Commission also examined Daubert and found it to be a possibly more valid and standardised admissibility criterion (Edmond & Roach, 2011). In the US, the Daubert Standard (1993, 509 US 579) states that:

Under the Daubert standard, the factors that may be considered in determining whether the methodology is valid are: (1) whether the theory or technique in question can be and has been tested; (2) whether it has been subjected to peer review and publication; (3) its known or potential error rate; (4) the existence and maintenance of standards controlling its operation; and (5) whether it has attracted widespread acceptance within a relevant scientific community.

However, in the UK, the essential rule on the admission of expert evidence is expressed in Folkes v. Chadd (1782) 3 Doug KB 157, where it was held:

The opinion of scientific men upon proven facts may be given by men of science within their science. An expert’s opinion is admissible to furnish the court with scientific information, which is likely to be outside the judge or jury’s experience and knowledge.

And in R. v. Turner (60 Cr App R 80, 1975), the court said:

An expert’s opinion is admissible to furnish the court with scientific information, which is likely to be outside the judge or jury’s experience and knowledge. Suppose on the proven facts, a judge or jury can form their conclusions without help. In that case, the opinion of an expert is unnecessary […] the fact that an expert witness has impressive scientific qualifications does not by that fact alone make his opinion on matters of human nature and behaviour within the limits of normality any more helpful than that of the jurors themselves; there is a danger that they think it does.

Although the Daubert standards were established in 1993 to assist and upgrade the rules of evidence related to the admissibility of psychological evidence, proposing that “judges should only admit expert-opinion evidence after determining that it is sufficiently reliable,” such evidentiary criteria are infrequently adopted as a framework for the admissibility of expert witnesses in research (Shapiro et al., 2015), and so their implications for the UK legal system remain to be evaluated. Guidelines are provided for distinguishing pseudoscientific evidence of questionable validity from areas with an adequate scientific basis. Moreover, an in-depth analysis specialising in the four Daubert categories found eight significant trends in psychological expert reports (McCan et al., 2015).

Reliability, Validity and Invalid Scientific Evidence

Tadei et al. (2016) identified the key factors judges consider when evaluating expert testimony, based on the responses of 87 judges. The criteria they prioritised for credibility include the expert’s experience, research activity, falsifiability, error rate, peer-reviewed research, and scientific acceptance. Results showed that judges place primary emphasis on the expert’s work experience when assessing the reliability of evidence. However, many judges lack the specific training to evaluate expert reports thoroughly, highlighting a gap between the legal and scientific domains.

Judges often rely on the expert’s credentials and other persuasion factors, such as expertise and likeability, when assessing argument quality, especially if the evidence is weaker (Kovera & McAuliff, 2000; Gatowski et al., 2001). Guarnera et al. (2017) echoed concerns about bias and unreliability in forensic science, citing unstandardised report procedures, lack of evaluator training, and allegiance bias. These issues can lead to unfair outcomes, challenging psychological experts’ confidence when providing expert psychological reports (EPRs).

Psychological reports need to be scientifically rigorous and reliable, while directly addressing the questions posed by judges or solicitors. However, laypeople, including jurors, often struggle to identify errors in expert evidence (Austin & Kovera, 2015). Iudici et al. (2015) found recurring judgement errors in 46 EPRs, indicating that additional guidance is essential to avoid misinterpretations within juror decisions.

The expert witness’s reliability is essential to the judicial process, yet involuntary errors of perception or judgement can undermine testimony. Lavis & Brewer (2017) recommended that cross-examinations employ control techniques to assess evidence accuracy. Guarnera et al. (2017) proposed that setting a standardised system for measuring field reliability in evaluations could improve report accuracy.

O’Neill et al. (2018) found discrepancies between professional expectations and the actual quality of single-expert reports by psychologists. Though rated positively, certain report components fell short of expectations, underlining the need for further research to address quality concerns and make improvements.

In conclusion, this section has examined the literature dealing with PP qualifications and training, the multi-dimensional aspects of EPR quality, and the research related to the content and admissibility of EPRs and the court’s feedback. The next section will outline and describe the methods and procedures of the current research in detail.

Methods

This study’s research approach is qualitative, focusing on naturalistic explorations (Cypress, 2017). Qualitative research is rooted in the overarching principle of constructivism, wherein a phenomenon is explored based on its unique perceptions and experiences (Sutton & Austin, 2015). Unlike quantitative research, where an in-depth understanding of individuals’ experiences is not the purpose, qualitative research delves into underlying meanings that cannot be captured by typical survey instruments (Cypress, 2017).

The research design adopted in this study is descriptive, a type of research design that summarises individuals’ experiences through detailed descriptions and narrative (Lambert & Lambert, 2012; Vaismoradi et al., 2013). Like other qualitative designs, the descriptive design is naturalistic, pragmatic, and less controlled (Lambert & Lambert, 2012). However, the descriptive design is not intended to produce a theory, as grounded theory is; to capture individuals’ lived experience, as phenomenology does; or to integrate the connections between a phenomenon and its natural context, as a case study does. Hence, the current research study is more consistent with a qualitative descriptive design’s simplicity and pragmatism.

The selection of a qualitative descriptive design is appropriate for exploring expert psychological witnesses’ descriptions of their professional experiences without resorting to the restrictive method of collecting survey questionnaires and standardised instruments. Compared to other qualitative research designs, for example, phenomenology and case study, qualitative descriptive research is less dependent on the methodology’s theoretical basis while maintaining the core characteristic of qualitative designs of determining themes to understand a phenomenon (Lambert & Lambert, 2012).

The application of clinical psychology (clinical assessment) by PPs in a forensic setting has been considered one of the most controversial clinical psychology applications (Iudici et al., 2015; Kacperska et al., 2016). The controversy of assessment in the forensic setting is primarily based on previous studies indicating that PP psychological reports should be improved to meet quality and reliability standards (Iudici et al., 2015; Kacperska et al., 2016). The research problem is based on the literature, which indicates a poor quality of psychological reports used by expert witnesses in courts (Acklin & Fuger, 2016; Gianvanni & Sharman, 2017).

The preceding literature review examined the research relevant to EPR assessment and expounded the theoretical framework. Key topics included assessing EPRs, the application of EPR assessment in family court proceedings, and the quality of such psychological reports; that is, exploring expert opinion on the content of expert witness psychological reports with a view to following up Ireland’s (2012) findings and recommendations.

Ethical Considerations

To ensure ethical compliance and participant protection, several procedures were implemented in line with the university’s Research Ethics Committee (REC) standards. These considerations also safeguarded the researcher from potential liability. Key ethical practices included obtaining informed consent, maintaining confidentiality, proper data disposal, voluntary participation, and securing REC approval.

REC approval was obtained prior to participant recruitment and individual interviews, providing an additional verification layer for ethical standards. Participants were provided with an Electronic Informed Consent Form (EICF) (Appendix E) before the interviews, which were conducted via phone. The EICF was sent via email prior to the scheduled interview, and written consent was required to proceed.

All participant data were carefully protected from public access and third parties to maintain confidentiality. For example, audio recordings were anonymised using randomly assigned numbers as labels. Participant information was stored on a password-protected USB drive in the researcher’s locked office. Encoded transcripts in NVivo contained only assigned numbers, with all identifying information redacted or removed.

After five years from the dissertation’s completion, all data will be systematically disposed of to protect participants. Paper documents will be shredded until unrecognisable, and electronic data will be permanently deleted from the researcher’s computer without retaining backup copies.

Participation in the semi-structured interviews was entirely voluntary, with no pressure exerted on participants. They could withdraw from the study at any time without facing negative consequences, such as financial penalties. Data from participants who chose to withdraw were excluded from the analysis.

Results

HCPC-registered and BPS-chartered practitioner psychologists (PPs) are considered expert witnesses in court if they have the credentials, training, and background relevant to the case being examined and adjudicated (Wolffram, 2018). The problem is that the quality of psychological reports used in courts as expert opinion remains poorly described and understood (Gianvanni & Sharman, 2017). The reliability of psychological assessment has also been poor, further supporting the argument that expert psychological reports used for the court are of poor quality (Acklin & Fuger, 2016). Unfortunately, there is limited knowledge of which aspects of psychological reports are considered weak or of poor quality (O’Neill et al., 2018). Therefore, researchers have called for more research on this topic (Gianvanni & Sharman, 2017). To answer this call and address this research problem, this study sought to explore expert psychological witnesses’ descriptions of their professional experiences to assess the state of implementation of Ireland’s (2012) recommendations regarding expert witness psychological reports.

This section reports the results of the qualitative thematic analysis of ten in-depth interviews with PP. The analysis produced five themes relating to the three research questions, listed in the table below. Each theme is presented in relation to the specific research question it addresses. To form the themes, the researcher first thoroughly reviewed the data. According to Braun et al. (2014), the analytic starting point in deductive coding is more ‘top-down’, as researchers bring existing theoretical constructs or theories to the data, and these provide the basis for ‘seeing’ it. Braun and Clarke (2019) also note that deductive coding is permissible in thematic analysis, although it should only be used to a limited extent.

Research Question | Theme                       | Respondents mentioning the theme
RQ1               | Sufficient domain knowledge | 100%
RQ2               | Psychological traits        | 100%
RQ2               | Cognitive skills            | 100%
RQ3               | Decision strategy           | 80%
RQ3               | Task suitability            | 70%

Theme 1: Sufficient Domain Knowledge

RQ2: How do practitioner psychologists maintain the quality of their expert psychological reports when providing expert opinion in the court?

Two major themes emerged from this second research question: psychological skills and cognitive skills. Psychological skills refer to traits that shape the decision styles of many experts, including strong self-confidence, excellent communication skills, the capability to adapt to new circumstances, and a clear, responsible attitude (Shanteau, 1992). More specifically, participants emphasised the importance of self-confidence and responsibility in maintaining the quality of their psychological reports. The second theme, cognitive skills, covers the ability to recognise what is relevant, the capability to identify rule exceptions, highly developed attention abilities, and the capacity to work efficiently under stress (Shanteau, 1992). Recognising rule exceptions, such as which methods are or are not admissible in court, is essential, as are highly developed attention skills and the ability to function effectively under stress (Shanteau, 1992). The specific terms developed under each theme are noted in the thematic descriptions below.

Theme 2: Self-Confidence

Within this second theme, related to the second research question, the core message focuses on the importance of self-confidence in ensuring high-quality expert psychological reports (EPR). Participants emphasised the necessity of only engaging in projects where they felt competent, as insecurity could negatively impact the quality of their work (Tadei et al., 2016). A majority highlighted the significance of feeling comfortable with assignments, asserting that taking on unsuitable cases would compromise their output.

In addition to self-confidence, participants acknowledged their responsibility for completing assignments successfully. They recognised that high-quality reports result from personal accountability, which includes utilising updated tests and methods while adhering to Civil Procedure Rules (CPR). Notably, none of the participants reported using assistants, asserting that they conducted all their reports independently. Participants stressed the importance of incorporating up-to-date assessments and methods into their reports, with several expressing an ethical obligation to use the most recent versions of psychological tests (Guarnera et al., 2017).

While participants demonstrated a commitment to quality, it was noted that only half checked their reports against CPR. Some participants were either unfamiliar with the rules or believed they did not apply to their practice (Lavis & Brewer, 2017). This inconsistency highlights a need for further training and standardisation in report writing practices (Iudici et al., 2015).

Theme 3: Recognising Relevance, Utilising Precision, and Maintaining Open-Mindedness

The second theme within the second research question, and the third theme overall, concerns cognitive skills. Participants identified three significant factors related to cognitive skills that contribute to the writing of high-quality reports: (1) recognising relevance, (2) utilising precision, and (3) maintaining open-mindedness. Recognising relevant information in their reports was deemed crucial for quality report writing, as highlighted by one participant who distinguished expert witness report writing from clinical report writing, emphasising the importance of tailoring the content for a specific audience, particularly the court (McCrory et al., 2016).

All ten participants agreed on the necessity of referencing specific psychological theories in their reports. Participants noted that these theories underpin their expertise and justify the consultation of psychologists in legal contexts. As one participant articulated, the incorporation of psychological theories adds depth to understanding the individual’s injury (Bennett et al., 2017). This sentiment was echoed by others, who asserted that theories lend credibility to their opinions and safeguard against potential scrutiny in court (Katz et al., 2018).

Participants identified several advantages of employing established theories, including enhancing objectivity, providing a robust theoretical basis for opinions, and fostering open dialogue (Roberts et al., 2019). For instance, one participant remarked that using psychometric assessments allows for thorough examination of responses, which can lead to further insightful discussions (Furnham et al., 2017). Overall, participants highlighted that integrating psychological theories not only enriches their reports but also serves to protect their professional integrity in a legal context.

It is important to note that a similar number of participants highlighted negative aspects related to cognitive skills, including issues of inaccuracy, misinterpretation, time consumption, overuse, and cultural barriers. The primary concern identified was inaccuracy; five participants expressed that tests might not always yield truthful results. As one participant stated, individuals may not always respond honestly to assessments (McCrory et al., 2016). Another participant pointed out that the ease of manipulation in psychometric testing could lead to misleading presentations of results, suggesting that reliance solely on these tests could produce inaccurate conclusions (Bennett et al., 2017).

Misinterpretation was also a significant issue, as articulated by a participant who emphasised the importance of distinguishing between raw scores and interpretive insights, warning that raw scores can be misinterpreted (Katz et al., 2018). Additionally, the time required for testing was cited as a concern, with one participant noting the annoyance of lengthy assessments (Roberts et al., 2019). Cultural and language factors were also raised as potential barriers to effective assessment.

Another cognitive skill deemed essential for writing high-quality reports was precision, particularly in referencing sources and distinguishing between facts and allegations. Seven out of ten participants believed it was necessary to provide complete references for sources such as interviews, client documents, and established theories. For these participants, referencing was vital for demonstrating how conclusions were drawn and ensuring comparability with other reports (Furnham et al., 2017). In contrast, a minority preferred to give only the name and a brief description of tests and theories. One participant explained that they typically provide generic descriptions without full citations, believing that naming the tests suffices for clarity (Furnham et al., 2017). Another participant acknowledged the importance of complete references but noted they had yet to be requested in practice.

Overall, while participants recognised the value of referencing for enhancing report quality, opinions varied on the necessity of providing complete citations for tests and theories used.

In addition to referencing tests and theories, participants were asked about their views on referencing data sources, such as interviews and documents. All ten participants reported consistently referencing these sources, deeming it essential for quality report writing. They emphasised that including these references is crucial for illustrating how conclusions are drawn and providing insight into the thought process behind them. This practice not only enhances the credibility of their observations but also substantiates their clinical opinions with evidence from peer-reviewed sources.

While five participants supported the idea of referencing documents to demonstrate how conclusions were reached, one participant suggested that such references need not be included in the main body of the report and could instead be summarised at the end. This approach allows for the integration of data with the authors’ opinions without overwhelming the reader with references throughout the report.

A critical aspect of cognitive skills necessary for report writing is the ability to distinguish between facts and allegations. All ten participants emphasised the importance of this differentiation, noting the necessity of clearly separating factual evidence, such as police records or video footage, from subjective comments made during interviews. They highlighted that while there may be allegations regarding an incident, only independently verified information should be classified as factual, whereas personal disclosures should be explicitly identified as client opinions.

Open-mindedness was another vital cognitive skill identified by participants, referring to their willingness to consider multiple explanations for incidents. They agreed that presenting various possibilities is crucial, as conclusions cannot be entirely definitive. The participants noted that when substantial information is lacking, it becomes challenging to assess whether specific risk factors are present. They emphasised the importance of making it clear when evidence is insufficient to draw definite conclusions.

Additionally, participants shared examples of cases where one explanation proved insufficient. In one case involving a young man sending inappropriate images to minors, participants reflected on the complexities of understanding his behaviour. They acknowledged that while there may be multiple explanations for his actions based on available evidence, no single reason could definitively account for his behaviour.

RQ3: How do practitioner psychologists integrate the court’s feedback to maintain their expert psychological reports’ quality?

There were two themes associated with the third and last research question: decision strategy and task suitability. Within these themes, participants described how they incorporated feedback from the courts into their reports.

Theme 4: Decision Strategy for Expert Opinion Report

The fourth theme identified in the study was decision strategy, which refers to methods that systematically organise decision-making and mitigate the cognitive limitations of experts. Shanteau (1992) describes these strategies as utilising dynamic feedback, decision aids, deconstructing complex problems, and leveraging prior solutions. The analysis found that the decision strategies employed by participants included: (1) direct feedback, (2) consulting others, and (3) integrating other experts’ opinions.

Regarding feedback, most participants indicated that they had received feedback from solicitors at least once, primarily focused on clarity in their reports. They noted the importance of avoiding jargon and ensuring that their opinions were expressed clearly to a non-specialist audience. However, feedback was often described as informal and generally not detailed. Some participants reported receiving vague comments, such as general thanks for their reports, while others mentioned that feedback typically involved requests for amendments, which varied in significance.

Direct contact between the expert and the solicitor was highlighted as crucial for obtaining meaningful feedback. Participants felt that personal discussions facilitated iterative clarifications on specific details, which could be lost when communication occurred through intermediaries. While some participants received no feedback from solicitors, those who did often noted positive, albeit general, responses.

Feedback from the judiciary was noted as rare and usually positive. Some participants mentioned that informal comments from judges were received, often appreciating the thoroughness of their reports but not offering detailed evaluations. While some participants expressed a desire for more critical feedback, others reported having received no formal feedback from the judiciary at all.

Participants also reported consulting colleagues and professional bodies to enhance the quality of their work. Eight participants indicated they regularly sought input from peers to confirm their opinions and ensure their reports were well-explained. This practice was particularly valued as a means to prevent poor decision-making, especially in cases where uncertainty existed or where participants lacked expertise in specific areas.

While most participants acknowledged the value of consulting others, a few noted that they did not do so regularly, citing concerns about maintaining client anonymity. Some participants also mentioned professional guidelines as useful resources, although they did not frequently consult professional bodies directly.

Lastly, regarding the integration of other experts’ opinions into their reports, participants had mixed experiences. Some reported being specifically instructed to consider certain expert opinions, while others had not been asked to do so. In instances where participants were encouraged to consult other experts, it was often due to gaps in their own expertise, necessitating input from specialists in relevant fields.

Theme 5: Task Suitability

The fifth and final theme identified in the study was task suitability, which refers to an expert’s ability to complete a task competently. Participants integrated feedback in two primary ways: (1) specific judiciary requests and (2) guidance. The former involved clear instructions on what to include or consider in their reports, while the latter related to receiving sufficient information and support for effective report writing.

A few participants reported receiving clear instructions directly from the judiciary; however, most noted that any guidance typically came from solicitors. While some participants found such requests helpful, they cautioned that these instructions could compromise their objectivity, as they might be influenced by solicitors’ opinions rather than focusing solely on evidence-based conclusions.

Conversely, several participants valued specific instructions, as they provided clarity on the expectations for report content. These instructions often included details such as length and the necessity for a concise summary, which helped streamline the writing process and ensure satisfaction among solicitors.

Five participants discussed the importance of producing quality reports, with mixed responses about the guidance received from commissioning companies. Some participants reported positive experiences, noting that certain companies offered templates and consultation services to enhance report quality. Others, however, expressed dissatisfaction with the lack of information provided by third-party groups, which made it challenging to complete reports accurately.

Overall, there was a moderate level of satisfaction with the guidance offered by commissioning companies. Participants acknowledged that the existing structure could improve but also recognised some value in the resources provided.

The study aimed to address three research questions. In response to the first question—”How qualified are practitioners who have provided expert opinions in court through their expert psychological reports?”—it was evident that all participants were members of relevant professional bodies and regularly engaged in training to stay updated on tools, requirements, and quality improvement. This suggested a high level of qualification among participants.

Regarding the second research question—”How do practitioners maintain the quality of their psychological reports when providing expert opinions in court?”—several themes emerged. Key traits included psychological skills, self-confidence, responsibility, and cognitive abilities, such as relevance and precision. Participants emphasised the importance of taking responsibility for their work and consulting colleagues when unsure about specific cases. Opinions varied on the use of psychological tests; some participants valued tests for their role in validating reports, while others warned that reliance on tests could hinder accurate interpretation of results. Notably, a significant finding was the generally low availability of detailed feedback from solicitors and courts, indicating an area needing improvement, as such feedback could enhance report quality.

Concerning the final research question—”How do practitioner psychologists integrate the court’s feedback to maintain the quality of their psychological reports?”—findings were limited. Participants generally described the feedback from the courts as vague and mostly positive, with specific recommendations being rare. This lack of detailed feedback presented challenges for effectively incorporating it into practice.

Reflexivity

In a research study, reflexivity involves examining one’s judgments, beliefs, and actions throughout the research and considering how they may affect it. It means questioning the beliefs and assumptions we take for granted, rather than pretending that the researcher had no influence on the research. It demands open-mindedness and a willingness to accept that the researcher is an element of the research (Finlay, 1998). In this study, the researcher’s reflexivity was supported by bracketing, applied throughout the entire data collection and analysis process: the researcher documented their own biases through reflexive journaling for the duration of the study.

Discussion

Previous research (e.g., Ireland, 2012) emphasised the lack of standards in utilising psychological expert witnesses; therefore, this study explored expert psychological witnesses’ descriptions of their professional experiences to understand how current EPR are prepared, with respect to both report quality and the CPR. The respondents were asked about their perceptions of the qualifications needed for the job, the psychological traits they perceive as necessary when performing their role, and the factors that affect their decision strategies. The interview data were analysed thematically. Brown et al. (2016) highlighted the importance of expert witnesses having the proper psychological skills and knowledge. This study showed that the participants had varying qualifications when assessed through their domain knowledge, training attended, and memberships of pertinent organisations. There are also limited standards governing the qualifications psychologists need to act as expert witnesses. Studies have also shown that, besides their curriculum vitae and proper training, some participants were selected based on their reputations within the industry, further reinforcing a potential lack of standards.

The participants also highlighted self-confidence, personal responsibility, the ability to distinguish facts from allegations, and open-mindedness as factors that influence their reports’ perceived quality. Prior studies have similarly shown that psychological factors can influence expert testimony (Gudjonsson, 1993). Previous findings have also suggested the importance of objectivity in expert report writing: the expert must refrain from suggesting any form of legal knowledge in their reports (Brown et al., 2016) and must not interfere with the legal function of the judge. Results also showed the importance of clarity and of knowing when and when not to reference theories in reports. Austin et al. (2015) argue that expert witnesses must explain information clearly to juries; otherwise, the information they provide may be dismissed by the jury when making decisions.

Ireland’s (2012) recommendations included training the judiciary in what to expect from psychological assessments; the results of the current study support a similar recommendation for a robust, all-round model of qualification, EPR training, and court feedback. Based on the results, PP who serve as expert witnesses must be aware of their responsibility, their role, and the courts’ expectations, which requires not only professional psychological training but also robust expert witness training and familiarity with the CPR guidelines, so that expectations about the information needed in the written report are aligned. Some participants noted that feedback is more likely to be informal and rarely detailed enough to be integrated into their work. Further effort is needed to ensure that expert witnesses are provided with useful and meaningful feedback and guidance on their work. This study is limited by some of the factors that commonly limit descriptive qualitative research: by the nature of most qualitative research, potential biases in the participants’ responses may threaten the data’s internal validity and transferability.

Limitations of the Study

The current research results are limited to the participating expert psychological witnesses’ experiences of providing expert opinion in court through psychological reports. The sample was chosen through an opportunity sampling technique, meaning that participants were selected partly for ease of recruitment. The recorded participant characteristics were gender (six males and four females), whether they had provided at least five EPR, and their self-classified area of practitioner psychology (five forensic psychologists, four clinical psychologists, and one neuropsychologist). Participants were not asked the exact number of EPR they had completed, only whether they had completed at least five. Characteristics that should be recorded in further research include the date participants qualified in their primary role, the date of their first expert witness report, their specific area of expertise, the number of expert witness reports completed to date, and how often they have had to attend court as an expert witness.

The sample size chosen for this study was also relatively small. Moreover, the psychologists recruited were mainly bounded within the geographic setting of this research. These factors may limit the study’s generalisability (Skovdal & Cornish, 2015). Therefore, the findings cannot validly be used as a definitive account of the state of expert opinions in all family court proceedings across the United Kingdom. This study did not focus on the other people involved in the court proceedings, and did not aim to explore the participants’ common affiliations beyond what they were willing to share during the interviews. The study’s main aim was to describe the participants’ professional experiences, not to produce theories. The writer did not aim to provide information that could suggest direct cause and effect for the current state of psychological report quality.

The interviews conducted with the PP focused on the participants’ qualifications, report-writing practices, and court feedback rather than on their perceptions of the influence of their reports on court proceedings. Thus, the actual flow and outcomes of their involvement in family court cases are not covered in the current research; instead, this study focused on the expert witnesses’ processes of writing, presenting, and receiving feedback on their work. The study’s credibility was maintained by ensuring that the results are based on the participants’ honest and accurate depictions of their experiences as expert witnesses in family court proceedings. However, given the nature of a descriptive qualitative study, participants’ potential biases when questioned about the quality of their own work could not be fully removed, and these biases could in turn have influenced the results and the internal validity of the research. The same limitation may also threaten the transferability of the data collected from the interviews.

Another limitation includes the study’s lack of insight into the actual appraisal of the participants’ professional work in family court proceedings. This lack of insight prevents the study from making direct statements on whether or not the participants’ activities and processes yield positive or negative results. While the results of the current study can be used to develop psychological assessment standards within family court proceedings, the results may not be generalisable to court proceedings in general, as the study’s recruitment process was highly focused on family court proceedings.

Implications for Practice

This study explored expert psychological witnesses’ descriptions of their professional experiences to understand the state of psychological expert witness reports and which factors the participants perceived to be most influential on their work quality. The findings can help inform the development of standards and policies for involving expert psychological witnesses in family court proceedings, by improving standardisation practices and ensuring that the experts included in these proceedings are qualified. The findings show that systematic changes are needed to improve the quality of psychological reports used in the legal context, and the results of this study can serve as a basis for such improvements in the legal application of clinical psychology (Acklin & Fuger, 2016). Prior work indicates that psychological expert witness reports vary in quality (Ireland, 2012; O’Neill et al., 2018). By describing the current state of implementation within the family courts scoped in this study, and by clarifying the participants’ perceptions of the qualifications, mindsets, practices, and strategies needed to perform their jobs well, this study can yield positive social implications for family courts. These may include an improved understanding of how to ensure that expert witnesses are qualified for their positions and provide the most objective, highest-quality reports possible, thereby reducing the chance of errors or mistrials resulting from underqualified individuals and poorly constructed reports.

Several researchers have also emphasised the importance of examining the various aspects of psychological report writing to identify common potential weaknesses and to understand the quality of the reports produced (Gianvanni & Sharman, 2017; O’Neill et al., 2018). This study has positive practical implications by providing a better understanding of the current practices and processes of psychological report writing used when providing expert opinion in court. The results can be used to facilitate the adoption of uniform practices for developing high-quality reports in family court proceedings. The results can also have positive social implications by helping to initiate the evaluation of current standards and practices, and by beginning the discussion on improving and standardising the entire process, from recruiting the most qualified people to evaluating the resulting evidence.

Further Studies

One way for researchers to extend the results of this study would be to provide a more in-depth understanding of the activities and psychological traits of psychological expert witnesses and their perceived influence on the quality of their psychological reports, together with an analysis of formal appraisals of those reports. The current research described the state of psychological expert witness report writing for family courts within the United Kingdom in terms of the qualifications necessary, the psychological traits that affect experts’ professional work, and their decision strategies when writing reports.

However, there are currently no systematic approaches for measuring the quality of expert witness reports written by psychologists. While the information provided by the participants is useful for forming an idea of the current state of practice, a better understanding is needed of how these reports are received by the court and the juries for whom they are written.

Although this study explored the participants’ perceptions of the qualifications needed to be an expert witness, further knowledge may be gathered by exploring the other agents involved, such as the prosecution, the defence, or the judge. This descriptive study would benefit from greater insight into the different perspectives within the family court regarding expert report quality. Assessing various parties’ viewpoints can also help researchers understand the applicability of this study’s theoretical framework in actual practice. A larger sample size could provide a more holistic description of the phenomenon being observed, and a larger group of interviewees may also lead to more generalisable findings. Further studies could explore the different types of court proceedings that use psychological expert witnesses, using the same approach and guided by the same theoretical framework.

However, despite these findings, some issues remain that require further research. The first concerns the reasoning behind the feedback, or lack of it, given to these expert witnesses: the issue may be that, because PP are already qualified, additional feedback is not deemed necessary. These limitations need to be considered more closely, as doing so may drive improvements in standardisation among PP. A second issue is that some individuals asked to act as expert witnesses may not hold qualifications that meet the desired standards. This may point to a need to renew existing standards, as well as to examine critically the admissibility of credentials within the court. Though these are problematic concerns for expert witnesses, the findings may also prompt renewed assessment at the policy and research levels. Researchers who explore these variables may find ample room for improving the standardisation of practices.

Given the benefits of expert psychologist training, there is a need to explore further standardisation of the training required by psychologists who are prospective expert witnesses. Ireland (2012) warned against using “risk assessments” that do not conform to expected standards, and stated that courts should not accept opinions based on unstructured clinical opinion. It is also evident from the decision strategy components that there is considerable variation across all three dimensions: feedback, consulting colleagues, and instructed use of other experts’ opinions. Hence, further research should develop a coherent model of decision strategy for PP to contribute to the much-needed improvement of the content of EPR. Based on the results of the current study, there is variation in expert witnesses’ approach to precision in their reports, which might be attributable to the lack of standards enforced on the data collection and analysis approaches they may use.

The Law Commission for England and Wales published a consultation paper (No. 190) proposing that a criterion similar to the Daubert Standard be adopted to assist and upgrade the rules of evidence governing the admissibility of psychological evidence; it proposed that “judges should only admit expert-opinion evidence after determining that it is sufficiently reliable.” The Law Commission also examined Daubert as a source of more accurate and reliable admissibility criteria (Edmond & Roach, 2011). Further research is recommended to investigate the EPR writing model in relation to what consultation paper No. 190 proposes concerning the admissibility of reports, so that PP, solicitors, and judges are all clear about what is expected of their own profession (i.e., psychology or law) as well as what they can clearly expect of each other’s.

Finally, there is little research, if any, on court feedback to PP regarding EPR. It is therefore also recommended that further research establish a model of feedback for PP that makes clear what our legal professionals expect. Two further questions also need to be considered in future research: first, why do people agree to expert witness work when their qualifications are not suitable? Second, why are people without the correct qualifications asked?

Conclusion

The current research was a follow-up to Ireland’s (2012) study and described the current state of psychological expert witness reporting in family court proceedings in the UK. However, different perspectives and a standardised appraisal of the resulting psychological reports need to be explored to ensure a complete understanding of the process, for the betterment of the function of family courts in the UK, and to make recommendations for extending the current state of research on psychologists as expert witnesses. The recommendations for further research are to (a) develop a more in-depth understanding of the activities and psychological traits of psychological expert witnesses and their perceived influence on the quality of their reports, and analyse formal appraisals of those reports; (b) better understand how these reports are received by the court and the juries for whom they are written; (c) assess various parties’ viewpoints, which can also help researchers understand the applicability of this study’s theoretical framework in actual practice; (d) examine the reasoning underlying the provision of feedback to expert witnesses; (e) explore further standardisation of the training required of psychologists who are prospective expert witnesses; (f) develop a coherent model of decision strategy for PP, to contribute to the much-needed improvement of the content of EPR; and (g) establish a model of feedback for PP that makes clear what our legal professionals expect.

Bibliography

  1. Acklin, M. W., & Fuger, K. (2016). Assessing field reliability of forensic decision making in criminal court. Journal of Forensic Psychology Practice, 16(2), 74-93.
  2. Anderson, C. A., & Bushman, B. J. (1997). External validity of “trivial” experiments: The case of laboratory aggression. Review of General Psychology, 1(1), 19-41.
  3. Austin, J. L., & Kovera, M. B. (2015). Cross-examination educates jurors about missing control groups in scientific evidence. Psychology, Public Policy, and Law, 21(3), 252.
  4. Bach, L. J., & Gudjonsson, G. H. (1998). Evaluation study of lawyers’ satisfaction with expert witness reports. Expert Evidence, 6(4), 261-271.
  5. Bahran, N., & Townsend, J. C. (2019, August). The competence of experts in criminal proceedings. Counsel magazine. https://www.counselmagazine.co.uk/articles/the-competence-of-experts-in-criminal-proceedings.
  6. Bartol, C. R., & Bartol, A. M. (2017). Introduction to forensic psychology: Research and application. Sage Publications.
  7. Bornstein, B. H. (1999). The ecological validity of jury simulations: Is the jury still out? Law and Human Behavior, 23(1), 75.
  8. Borum, R., & Grisso, T. (1996). Establishing standards for criminal forensic reports: An empirical analysis. Bulletin of the American Academy of Psychiatry & the Law.
  9. British Psychological Society (2021). Directory of Expert Witnesses. Retrieved from https://www.bps.org.uk/lists/EWT
  10. Bronstein, D. A. (2016). Law for the expert witness. CRC Press.
  11. Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health, 11(4), 589-597.
  12. Braun, V., Clarke, V., & Terry, G. (2014). Thematic analysis. In Rohleder, P., & Lyons, A. (Eds.), Qualitative research in clinical and health psychology, 95-113.
  13. Carter, P. (2012, March 13). How competent are expert witnesses? 4 News.
  14. Convery, P., & McCabe, N. (2019, November 22). False Witness. The Law Society Gazette, 34.
  15. Curci, A., Lanciano, T., Curtotti, D., & Sartori, G. (2020). Lessons for the courtroom from the study of Flashbulb memory: An integrative review. Memory, 28(3), 441-449.
  16. Cutler, B. L. (2009). Expert testimony on the psychology of eyewitness identification. Oxford University Press.
  17. Cutler, B. L., & Kovera, M. B. (2011). Expert psychological testimony. Current Directions in Psychological Science, 20(1), 53-57.
  18. Cypress, B. S. (2017). Rigor or reliability and validity in qualitative research: Perspectives, strategies, reconceptualization, and recommendations. Dimensions of Critical Care Nursing, 36(4), 253-263.
  19. Farndon, H. (2016). HCPC registered psychologists in the UK. The British Psychological Society.
  20. Faust, D., & Ziskin, J. (1988). The expert witness in psychology and psychiatry. Science, 241(4861), 31-35.
  21. Grisso, T. (2010). Guidance for improving forensic reports: A review of common errors. Open Access Journal of Forensic Psychology, 2, 102-115.
  22. Guarnera, L. A., Murrie, D. C., & Boccaccini, M. T. (2017). Why do forensic experts disagree? Sources of unreliability and bias in forensic psychology evaluations. Translational Issues in Psychological Science, 3(2), 143-152.
  23. Gudjonsson, G. H. (1993). Confession evidence, psychological vulnerability and expert testimony. Journal of Community & Applied Social Psychology, 3(2), 117-129.
  24. Gutheil, T. G., & Gabbard, G. O. (1998). Misuses and misunderstandings of boundary theory in clinical and regulatory settings. American Journal of Psychiatry, 155(3), 409-414.
  25. Heilbrun, K. (2001). Testifying effectively. Principles of Forensic Mental Health Assessment, 255-282.
  26. Hosch, H. M., Jolly, K. W., Schmersal, L. A., & Smith, B. A. (2009). Expert psychology testimony on eyewitness identification: Consensus among experts. Expert testimony on the psychology of eyewitness identification, 143-164.
  27. Ireland, J. L. (2012). Evaluating expert witness psychological reports: Exploring quality. Unpublished report presented to the Family Justice Council.
  28. Iudici, A., Salvini, A., Faccio, E., & Castelnuovo, G. (2015). The clinical assessment in the legal field: An empirical study of bias and limitations in forensic expertise. Frontiers in Psychology, 6, 1831.
  29. Jackson, R., & Roesch, R. (Eds.). (2015). Learning forensic assessment: Research and practice. Routledge.
  30. Kacperska, I., Heitzman, J., Bąk, T., Leśko, A. W., & Opio, M. (2016). Reliability of repeated forensic evaluations of legal sanity. International Journal of Law and Psychiatry, 44, 24-29.
  31. Kim, H., Sefcik, J. S., & Bradway, C. (2017). Characteristics of qualitative descriptive studies: A systematic review. Research in Nursing & Health, 40(1), 23-42.
  32. Kovera, M. B., Russano, M. B., & McAuliff, B. D. (2002). Assessment of the commonsense psychology underlying Daubert: Legal decision makers’ abilities to evaluate expert evidence in hostile work environment cases. Psychology, Public Policy, and Law, 8(2), 180.
  33. Kovera, M. B., & McAuliff, B. D. (2000). The effects of peer review and evidence quality on judge evaluations of psychological science: Are judges effective gatekeepers? Journal of Applied Psychology, 85(4), 574.
  34. Lambert, V. A., & Lambert, C. E. (2012). Qualitative descriptive research: An acceptable design. Pacific Rim International Journal of Nursing Research, 16(4), 255-256.
  35. Lavis, T., & Brewer, N. (2017). Effects of a proven error on evaluations of witness testimony. Law and Human Behavior, 41(3), 314.
  36. Law, J., & Martin, E. A. (2009). A Dictionary of Law (7th ed.). Oxford University Press.
  37. Martire, K. A., & Montgomery-Farrer, B. (2020). Judging experts: Australian magistrates’ evaluations of expert opinion quality. Psychiatry, Psychology and Law, 1-13.
  38. Mason, P. (Ed.). (2012). Criminal visions. Routledge.
  39. McAuliff, B. D., & Bull Kovera, M. (2012). Do jurors get what they expect? Traditional versus alternative forms of children’s testimony. Psychology, Crime & Law, 18(1), 27-47.
  40. Melton, G. B., Petrila, J., Poythress, N. G., Slobogin, C., Otto, R. K., Mossman, D., & Condie, L. O. (2018). Psychological evaluations for the courts: A handbook for mental health professionals and lawyers. Guilford Publications.
  41. Neal, T. M., Slobogin, C., Saks, M. J., Faigman, D. L., & Geisinger, K. F. (2020). Psychological assessments in legal contexts: Are courts keeping “junk science” out of the courtroom? Psychological Science in the Public Interest, 20(3), 135-164.
  42. Newman, S. A. (1994). Assessing the quality of expert testimony in cases involving children. The Journal of Psychiatry & Law, 22(2), 181-234.
  43. O’Neill, A. T., Bussey, K., Lennings, C. J., & Seidler, K. M. (2018). The views of psychologists, lawyers, and judges on key components and the quality of child custody evaluations in Australia. Family Court Review, 56(1), 64-78.
  44. O’Leary v Mercy University Hospital Cork Ltd [2019] in Bahran, N., & Townsend, J. C. (2019, August). The competence of experts in criminal proceedings. Counsel magazine. https://www.counselmagazine.co.uk/articles/the-competence-of-experts-in-criminal-proceedings.
  45. Otgaar, H., de Ruiter, C., Howe, M. L., Hoetmer, L., & van Reekum, P. (2017). A case concerning children’s false memories of abuse: Recommendations regarding expert witness work. Psychiatry, Psychology and Law, 24(3), 365-378.
  46. Rangaswamy, K. (2017). Forensic psychological function of clinical psychologist in the field of mental health. Psychological Researches, 60(1), 3-9.
  47. Reed, L. (2012, April 12). Experts upon experts. Pink Tape, A Blog from The Family Bar. http://www.pinktape.co.uk/courts/experts-upon-experts/
  48. Reed, L. (2016, June 6). The Ireland Report and the Fitness to Practice Panel in respect of Professor Ireland. Transparency Project. http://www.transparencyproject.org.uk/the-ireland-report-and-the-fitness-to-practice-panel-in-respect-of-professor-ireland/
  49. Robertson, L., & Broadhurst, K. (2019). Introducing social science evidence in family court decision-making and adjudication: Evidence from England and Wales. International Journal of Law, Policy and the Family, 33(2), 181-203.
  50. Shanteau, J. (1992). Competence in experts: The role of task characteristics. Organizational Behavior and Human Decision Processes, 53(2), 252-266.
  51. Shapiro, D. L., Mixon, L., Jackson, M., & Shook, J. (2015). Psychological expert witness testimony and judicial decision making trends. International Journal of Law and Psychiatry, 42, 149-153.
  52. Skovdal, M., & Cornish, F. (2015). Qualitative research for development: A guide for practitioners. Practical Action Publishing.
  53. Slingo, J. (2019, June 17). New focus: Inexpert evidence. The Law Society Gazette. https://www.lawgazette.co.uk/news-focus/news-focus-inexpert-evidence/5070629.article
  54. Sutton, J., & Austin, Z. (2015). Qualitative research: Data collection, analysis, and management. The Canadian Journal of Hospital Pharmacy, 68(3), 226.
  55. Tadei, A., Finnilä, K., Reite, A., Antfolk, J., & Santtila, P. (2016). Judges’ capacity to evaluate psychological and psychiatric expert testimony. Nordic Psychology, 68(3), 204-217.
  56. The British Psychological Society (2016). Psychologists as Expert Witnesses – Guidelines and Procedures for England, Wales, Scotland, and Northern Ireland. The British Psychological Society.
  57. The British Psychological Society (2018). Expert witness survey results. The British Psychological Society, 30, 19-20.
  58. Tucker, J. (2014). Acting as an expert witness. Forensic Odontology: An Essential Guide, 23-48.
  59. Vaismoradi, M., Turunen, H., & Bondas, T. (2013). Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing & Health Sciences, 15(3), 398-405.
  60. Weissman, D. M. (2012). Making a killing: Femicide, free trade and la frontera edited by Alicia Gaspar de Alba with Georgina Guzman. Springer, 10, 414-416.
  61. Wettstein, R. M. (2005). Quality and quality improvement in forensic mental health evaluations. Journal of the American Academy of Psychiatry and the Law Online, 33(2), 158-175.
  62. Wolffram, H. (2018). Psychological expertise in the courtroom: The Frenzel trial. In Forensic Psychology in Germany. Palgrave Macmillan, Cham, 155-194.
  63. Wygant, D. B., & Lareau, C. R. (2015). Civil and criminal forensic psychological assessment: Similarities and unique challenges. Psychological Injury and Law, 8(1), 11-26.
  64. Young, G., & Brodsky, S. L. (2016). The 4 Ds of forensic mental health assessments of personal injury. Psychological Injury and Law, 9(3), 278-281.
  65. Zumbach, J., & Koglin, U. (2015). Psychological evaluations in family law proceedings: A systematic review of the contemporary literature. Professional Psychology: Research and Practice, 46(4), 221-249.
  66. Zwartz, M. (2018). Report writing in the forensic context: Recurring problems and the use of a checklist to address them. Psychiatry, Psychology and Law, 25(4), 578-588.