
ACCORD (ACcurate COnsensus Reporting Document): A reporting guideline for consensus methods in biomedicine developed via a modified Delphi

Abstract

Background

In biomedical research, it is often desirable to seek consensus among individuals who have differing perspectives and experience. This is important when evidence is emerging, inconsistent, limited, or absent. Even when research evidence is abundant, clinical recommendations, policy decisions, and priority-setting may still require agreement from multiple, sometimes ideologically opposed parties. Despite their prominence and influence on key decisions, consensus methods are often poorly reported. Our aim was to develop ACCORD (ACcurate COnsensus Reporting Document), the first reporting guideline dedicated to and applicable to all consensus methods used in biomedical research, regardless of the objective of the consensus process.

Methods and findings

We followed methodology recommended by the EQUATOR Network for the development of reporting guidelines: a systematic review was followed by a Delphi process and meetings to finalize the ACCORD checklist. The preliminary checklist was drawn from the systematic review of existing literature on the quality of reporting of consensus methods and suggestions from the Steering Committee. A Delphi panel (n = 72) was recruited with representation from 6 continents and a broad range of experience, including clinical, research, policy, and patient perspectives. The 3 rounds of the Delphi process were completed by 58, 54, and 51 panelists. The preliminary checklist of 56 items was refined to a final checklist of 35 items relating to the article title (n = 1), introduction (n = 3), methods (n = 21), results (n = 5), discussion (n = 2), and other information (n = 3).

Conclusions

The ACCORD checklist is the first reporting guideline applicable to all consensus-based studies. It will support authors in writing accurate, detailed manuscripts, thereby improving the completeness and transparency of reporting and providing readers with clarity regarding the methods used to reach agreement. Furthermore, the checklist will make the rigor of the consensus methods used to guide the recommendations clear for readers. Reporting consensus studies with greater clarity and transparency may enhance trust in the recommendations made by consensus panels.

Background

Evidence-based medicine relies on (1) the best available evidence; (2) patients’ values, preferences, and knowledge; and (3) healthcare professionals’ experience and expertise [1,2]. When healthcare professionals need to make clinical decisions, or when recommendations or guidance are needed, and there is uncertainty about the best course of action—such as when evidence is emergent, inconsistent, limited, or absent, not least in rapidly evolving fields such as pandemics [3]—the collation and dissemination of knowledge, experience, and expertise becomes critical. Coordinating this process may be best achieved through the use of formal consensus methods [4] such as those described in Table 1.

Table 1. A selection of common consensus methods used in healthcare-related activities or research.

https://doi.org/10.1371/journal.pmed.1004326.t001

Consensus methods are widely applied in healthcare (Table 2). However, the specific method has the potential to affect the result of a consensus exercise and shape the recommendations generated. In addition, the expertise needed to contribute to the consensus process will vary depending on the research subject, and a range of participants may be required, including, but not limited to, clinical guideline developers, clinical researchers, healthcare professionals, epidemiologists, ethicists, funders, journal editors, laboratory specialists, medical publication professionals, meta-researchers, methodologists, pathologists, patients and carers/families, pharmaceutical companies, public health specialists, policymakers, politicians, research scientists, surgeons, systematic reviewers, and technicians.

Table 2. Examples of applications of consensus methods in healthcare-related research.

https://doi.org/10.1371/journal.pmed.1004326.t002

Consensus obtained from a group of experts using formal methods is recognized as being more reliable than individual opinions and experiences [16–18]. Consensus methods help to overcome the challenges of gathering opinions from a group, such as discussions being dominated by a small number of individuals, peer pressure to conform to a particular opinion, or the risk of group biases affecting overall decision-making [4].

Despite their critical role in healthcare and policy decision-making, consensus methods are often poorly reported [19]. Generic problems include inconsistency and lack of transparency in reporting, as well as more specific criticisms such as lack of detail regarding how participants or steering committee members were selected, missing panelist background information, no definition of consensus, missing response rates after each consensus round, no description of level of anonymity or how anonymity was maintained, and a lack of clarity over what feedback was provided between rounds [19].

Reporting guidelines can enhance the reporting quality of research [20–22], and the absence of a universal reporting guideline for studies using consensus methods may contribute to their well-documented suboptimal reporting quality [5,19,23–25]. A systematic review found that the quality of reporting of consensus methods in health research was deficient [19], and a methodological review found that articles providing guidance on reporting Delphi methods vary widely in their criteria and level of detail [25]. The Conducting and Reporting Delphi Studies (CREDES) guideline was designed to support the conduct and reporting of Delphi studies, with a focus on palliative care [26]. The 23-item AGREE-II instrument [27], which is widely used for reporting clinical practice guidelines, and COS-STAR, for reporting core outcome set development [28], both contain a very limited number of items related to consensus.

Therefore, a comprehensive guideline is needed to report the numerous methods available to assess and/or guide consensus in medical research. The ACcurate COnsensus Reporting Document (ACCORD) reporting guideline project was initiated to fulfill this need. We followed EQUATOR Network–recommended best practices for reporting guideline development, which included a systematic review and consensus exercise. Our aim was to develop a new tool, applicable worldwide, that will facilitate the rigorous and transparent reporting of all types of consensus methods across the spectrum of health research [29]. A comprehensive reporting guideline will enable readers to understand the consensus methods used to develop recommendations and therefore has the potential to positively impact patient outcomes.

Methods

Scope of ACCORD

ACCORD is a meta-research project to develop a reporting guideline for consensus methods used in health-related activities or research (Table 2) [29]. The guideline was designed to be applicable to simple and less structured methods (such as consensus meetings), more systematic methods (such as nominal group technique or Delphi), or any combination of methods utilized to achieve consensus. Therefore, the ACCORD checklist should be applicable to work involving any consensus methods. In addition, although ACCORD has been structured to help reporting in a scientific manuscript (with the traditional article sections such as introduction, methods, results, and discussion), the checklist items can assist authors in writing other types of text describing consensus activities.

ACCORD is a reporting guideline that provides a checklist of items that we recommend are included in any scientific publication in healthcare reporting the results of a consensus exercise. However, it is not a methodological guideline. It is not intended to provide guidance on how researchers and specialists should design their consensus activities, and it makes no judgment on which method is most appropriate in a particular context. Furthermore, ACCORD is not intended to be used for reporting research in fields outside health, such as social sciences, economics, or marketing.

Study design, setting, and ethics

The ACCORD project was registered prospectively on January 20, 2022, on the Open Science Framework [30] and the EQUATOR Network website [31], and received ethics approval from the Central University Research Ethics Committee at the University of Oxford (reference number: R81767/RE001). The ACCORD protocol has been previously published [29] and followed the EQUATOR Network recommendations for developing a reporting guideline [32,33], starting with a systematic review of the literature [19], followed by a modified Delphi process. In a planned change to the Delphi method as originally formulated, the preliminary list for voting was based on the findings of this systematic review rather than initial ideas or statements from the ACCORD Delphi panel, although the panel could suggest items during the first round of voting. In addition, the ACCORD Steering Committee made final decisions on item inclusion and refined the checklist wording, as described below.

ACCORD Steering Committee

WTG and NH founded the ACCORD project, seeking endorsement from the International Society of Medical Publication Professionals (ISMPP) in April 2021. ISMPP provided practical support and guidance on the overall process at project outset but was not involved in checklist development. The ACCORD Steering Committee, established over the following months, was multidisciplinary in nature and comprised researchers from different countries and settings. Steering Committee recruitment was iterative, with new members invited as needs were identified by the founders and existing committee, to ensure inclusion of the desired range of expertise or experience. Potential members were identified via ISMPP, literature research, professional connections, and network recommendations. When the protocol was submitted for publication, the Steering Committee had 11 members (WTG, PL, EJvZ, AP, CCW, DT, KG, APH, NH, and Robert Matheis [RM] from ISMPP). Bernd Arents joined the Steering Committee in July 2021 but left in December of that year, as did RM in August 2022, both citing an excess of commitments as their reason for stepping down. Patient partners were invited as Delphi panelists. Paul Blazey joined the Steering Committee in September 2022 as a methodologist to support the execution of the ACCORD Delphi process and provide additional expertise on consensus methods.

The final Steering Committee responsible for the Delphi process and development of the checklist had members working in 4 different countries: Canada, the United Kingdom, the United States of America, and the Netherlands. The Steering Committee represented a wide range of professional roles, with several members bringing experience from more than one area, including clinical practitioners (medical doctor, physical therapist), methodologists (consensus methodologist, research methodologist, expert in evidence synthesis), medical publication professionals (including those working in the pharmaceutical industry), journal editors, a representative of the EQUATOR Network, and a representative of the public (S1 Text).

Protocol development

The ACCORD protocol was developed by the Steering Committee before the literature searches or Delphi rounds were commenced and has been published previously [29]. An overview of the methods used, together with some amendments made to the protocol during the development of ACCORD in response to new insights, is provided below.

Systematic review and development of preliminary checklist

A subgroup of the Steering Committee conducted a systematic review with the dual purpose of identifying existing evidence on the quality of reporting of consensus methods and generating the preliminary draft checklist of items that should be reported [19]. The systematic review has been published [19] and identified 18 studies that addressed the quality of reporting of consensus methods, with 14 studies focused on Delphi only and 4 studies including Delphi and other methods [19]. A list of deficiencies in consensus reporting was compiled based on the findings of the systematic review. Items in the preliminary checklist were subsequently derived from the systematic review both from the data extraction list (n = 30) [19] and from other information that was relevant for reporting consensus methods (n = 26) [19].

Next, the Steering Committee voted on whether the preliminary checklist items (n = 56) should be included in the Delphi via 2 anonymous online surveys conducted using Microsoft Forms (see S2 Text). There were 5 voting options: “Strongly disagree,” “Disagree,” “Agree,” “Strongly agree,” and “Abstain/Unable to answer.” NH processed the results in Excel and WTG provided feedback; therefore, neither voted. Items that received sufficient support (i.e., >80% of respondents voted “Agree”/“Strongly agree”) were included in the Delphi, while the rest were discussed by the Steering Committee for potential inclusion or removal. During the first survey, Steering Committee members could propose additional items based on their knowledge and expertise. These new items were voted on in the second Steering Committee survey. Upon completion of this process, the Steering Committee approved and updated the preliminary draft checklist, which was then prepared for voting by the Delphi panel. Items were clustered or separated as necessary for clarity.
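To make this screening rule concrete, the following minimal sketch (in Python, which was not part of the ACCORD tooling; the helper name `passes_screening` and the ballot are hypothetical) applies the >80% support threshold to one item. Consistent with the wording “>80% of respondents,” every returned ballot, including “Abstain/Unable to answer,” is counted in the denominator here; the paper does not spell out this detail, so treat it as an assumption.

```python
from collections import Counter

def passes_screening(votes, threshold=0.80):
    """Return True if more than `threshold` of respondents supported the item.

    Support means a vote of "Agree" or "Strongly agree". Assumption: every
    returned ballot, including "Abstain/Unable to answer", counts toward the
    denominator, matching the wording ">80% of respondents".
    """
    counts = Counter(votes)
    supporting = counts["Agree"] + counts["Strongly agree"]
    return supporting / len(votes) > threshold

# Hypothetical ballot for one preliminary checklist item (9 respondents).
item_votes = ["Strongly agree"] * 5 + ["Agree"] * 3 + ["Disagree"]
print(passes_screening(item_votes))  # True: 8/9 ≈ 89% support
```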

Delphi panel composition

Using an anonymous survey (June 9–13, 2022), the Steering Committee voted on the desired profile of Delphi panelists for the ACCORD project. There was unanimous agreement that geographic representation was important, and the aim was to recruit from all continents (thereby covering both Northern and Southern hemispheres) and include participants from low-, middle-, and high-income countries to account for potential differences in cultural and ideological ways of reaching agreement. The aim was to include a broad range of participants: clinicians, researchers experienced in the use of consensus methods and in clinical practice guideline development, patient advocates, journal editors, publication professionals and publishers, regulatory specialists, public health policymakers, and pharmaceutical company representatives. As described in the ACCORD protocol [29], there are no generally agreed standards for the panel size in Delphi studies, although panels of 20 to 30 are common. The target panel size (approximately 40 panelists) was therefore guided by the desired representation and by the need to secure an acceptable number of responses (20, assuming a participation rate of 50%) in the event of withdrawals or partial completion of review.

Delphi panel recruitment

Potential participants for the Delphi panel were identified in several ways: from the author lists of publications included in the systematic review, from invitations circulated via an EQUATOR Network newsletter (October 2021) [34] and at the European Meeting of ISMPP in January 2022, and by contacting groups potentially impacted by ACCORD (e.g., the UK National Institute for Health and Care Excellence [NICE]). Individuals were also invited to take part through the ACCORD protocol publication [29], and the members of the Steering Committee contacted individuals in their networks to fill gaps in geographical or professional representation. To minimize potential bias, none of the Steering Committee participated in the Delphi panel.

Invitations were issued to candidate panelists who satisfied the inclusion criteria. While participants were not generally asked to suggest other panel members, in some cases, invitees proposed a colleague to replace them on the panel. Only the Steering Committee members responsible for administering the Delphi had access to the full list of ACCORD Delphi panel members. Panelists were invited by email, and reminder emails were sent to those who did not respond. Out of the 133 panelists invited, 72 agreed to participate. No panelists or Steering Committee members were reimbursed or remunerated for taking part in the ACCORD project.

Planned Delphi process

The Delphi method was chosen to validate the checklist, in line with recommendations for developing reporting guidelines [32]. A 3-round Delphi was planned to allow for iteration, with the option to include additional rounds if necessary. Panelists who agreed to take part received an information pack containing an introductory letter, a plain language summary, an informed consent statement, links to the published protocol and systematic review, and the items excluded by the Steering Committee (see S3 Text). Survey materials were developed by PL and PB in English and piloted by WTG and NH. Editorial and formatting changes were made following the pilot stage to optimize the ease of use of the survey. In an amendment to the protocol, the order of candidate items was not randomized within each manuscript section. The Jisc Online Survey platform (Jisc Services, Bristol, United Kingdom) was used to administer all Delphi surveys, ensuring anonymity through automatic coding of participants. Panelists were sent reminders to complete the survey via the survey platform, and one email reminder was sent to panelists the day before the deadline for each round.

The Delphi voting was modified to offer 5 voting options: “Strongly disagree,” “Disagree,” “Neither agree nor disagree,” “Agree,” and “Strongly agree.” Votes of “Neither agree nor disagree” were included in the denominator. The consensus threshold was defined a priori as ≥80% of a minimum of 20 respondents voting “Agree” or “Strongly agree.” Reaching the consensus threshold was not a stopping criterion. For inclusion in the final checklist, each item was required to achieve the consensus criteria following at least 2 rounds of voting. This ensured that all items had the opportunity for iteration between rounds (a central tenet of the Delphi method) [6] and enabled panelists to reconsider their voting position in light of feedback from the previous round.
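As a worked illustration of this rule, consider a hypothetical round with 24 respondents: 20 votes of “Agree” or “Strongly agree” give 20/24 ≈ 83%, which meets the ≥80% threshold, while the 3 neutral and 1 “Disagree” votes count against consensus because they remain in the denominator. The Python sketch below encodes the a priori criterion; the helper `reaches_consensus` and the vote data are ours, not from the ACCORD materials.

```python
from collections import Counter

SUPPORT = {"Agree", "Strongly agree"}
MIN_RESPONDENTS = 20   # minimum valid number of respondents per round
THRESHOLD = 0.80       # a priori consensus threshold

def reaches_consensus(votes):
    """Apply the a priori consensus rule to one item in one voting round.

    "Neither agree nor disagree" votes stay in the denominator, so neutral
    votes effectively count against consensus.
    """
    if len(votes) < MIN_RESPONDENTS:
        return False  # too few respondents for a valid result
    counts = Counter(votes)
    support = sum(counts[option] for option in SUPPORT)
    return support / len(votes) >= THRESHOLD

# Hypothetical round with 24 respondents, 20 of them supportive.
votes = (["Strongly agree"] * 12 + ["Agree"] * 8
         + ["Neither agree nor disagree"] * 3 + ["Disagree"])
print(reaches_consensus(votes))  # True: 20/24 ≈ 83% >= 80%
```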

In Round 1, panelists had the opportunity, anonymously, to suggest new items to be voted on in subsequent rounds. Panelists were also able to provide anonymous free-text comments in each round to add rationale for their chosen vote or suggest alterations to the item text. After each voting round, the comments were evaluated and integrated by WTG, PL, PB, and NH and validated by the Steering Committee. If necessary, semantic changes were made to items to improve clarity and concision.

Feedback given to participants between rounds included the anonymized total votes and the percentage in each category (see example in S4 Text) to allow panelists to assess their position in comparison with the rest of the group, as well as the relevant free-text comments on each item. Items that did not achieve consensus in Rounds 1 and 2 were revised or excluded based on the feedback received from the panelists. Items that were materially altered (to change their original meaning) were considered a new item. All wording changes were recorded. Panelists received a table highlighting wording changes as part of the feedback process so that they could see modifications to checklist items (for example feedback documents, see S5 Text).

Items reaching consensus over 2 rounds were removed from the Delphi for inclusion in the checklist. Items that achieved agreement in Round 1 but then fell out of agreement in Round 2 were considered to have “unstable” agreement. These unstable items were revised based on qualitative feedback from the panel and were included for revoting in Round 3.
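A minimal sketch of this round-to-round bookkeeping is shown below (Python; the helper `classify_item` and the vote histories are hypothetical). It assumes that meeting the threshold in the latest of at least 2 voting rounds is sufficient for inclusion; that is our reading of the rule, not an explicit statement in the protocol.

```python
def classify_item(history):
    """Decide an item's fate from its per-round consensus results.

    `history` lists one boolean per round in which the item was voted on,
    True when the >=80% threshold was met. Assumption: consensus in the
    latest of at least 2 voting rounds is sufficient for inclusion.
    """
    if len(history) >= 2 and history[-1]:
        return "include"    # consensus held after at least 2 rounds
    if len(history) >= 2 and history[-2] and not history[-1]:
        return "unstable"   # agreement lost between rounds: revise, revote
    return "continue"       # carry the (possibly revised) item forward

print(classify_item([True, True]))    # include: stable consensus
print(classify_item([True, False]))   # unstable: revote in Round 3
print(classify_item([False]))         # continue: vote again next round
```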

Steering Committee checklist finalization process

Consistent with the protocol [29], following completion of the Delphi process, the Steering Committee was convened for a series of three 2-hour virtual workshops (March 7, 14, and 16, 2023) to make decisions and finalize the checklist. For each item, WTG, PL, PB, and NH presented a summary of voting, comments received, and a recommended approach. The possible recommended approaches are shown in S6 Text.

Each recommendation (for example, to keep approved items or to confirm exclusion of rejected items) was followed by an explanation of why WTG, PL, PB, or NH felt this would be the most appropriate action, and by a discussion among Steering Committee members in which the suggested action could be challenged and changed.

Grammatical changes were also considered at this stage but only where they did not change the meaning of an approved item. Following review of all items, the order of the checklist items was evaluated by WTG, PL, PB, and NH.

Standardized terminology

After the consensus meetings, NH updated and standardized the terminology according to the type of information requested in the item to ensure consistency between items, and this was approved by the Steering Committee. This standardization of terminology incorporated rules established for the use of terms common in reporting guidelines, as shown in S7 Text, such as the difference between using “state” or “describe.” All but 2 items (R5 and O1) contain a verb from S7 Text.

Results

Delphi panel demographics

The Delphi panel included a diverse group of panelists, representing a wide range of geographical areas and professions (Table 3). Of the 72 participants who indicated their willingness to participate in the Delphi panel, 58 (81%) completed Round 1 and were invited to Round 2. Fifty-four participants completed Round 2 and were invited to Round 3, which was completed by 51 participants.

Table 3. Self-identified demographics of the Delphi panelists, per voting round.

https://doi.org/10.1371/journal.pmed.1004326.t003

Delphi results

The updated preliminary draft checklist presented to the Delphi panel for voting contained 41 items. The changes in the number of checklist items over the Delphi voting rounds are illustrated in Fig 1. After Round 1, 7 new items were added and 1 item was removed by combining it with another, resulting in 47 items being included in Round 2. After Round 2, 33 items achieved consensus; only items that were unstable (n = 4) or were modified sufficiently to be considered new (n = 6) were voted on in Round 3, and a further 3 items achieved consensus after all 3 rounds of voting. Therefore, at the end of the Delphi process, consensus was reached on 36 items. The results of the Delphi process, showing the iteration of items and level of agreement at each round, are summarized in S8 Text.

Fig 1. A flow diagram to show the development of checklist items.

*Potential items from relevant information beyond the predefined data extraction form [19]. New item (T1) proposed at checklist review meeting.

https://doi.org/10.1371/journal.pmed.1004326.g001

Finalization by Steering Committee

One item rejected by the Delphi panel was restored to the checklist (M10, becoming item M5), and 3 highly approved (>90%) items were modified by combining them with other items during the Steering Committee finalization workshops. S8 Text records each round of the Delphi voting, demonstrating the changes made in each round and showing how items evolved.

Restored item (Delphi M10 > Final M5)

Delphi item M10 (patient and public involvement) failed to achieve stable consensus during the voting process (Round 1, 87.5%; Round 2, 73.1%; Round 3, 76.0%; see S8 Text). The comments from the panel led the Steering Committee to conclude that panelists had not reached agreement on reporting patient and public involvement due to the item being essential in some—but not all—consensus processes (“Depends on the topic of Delphi consensus, should be optional”; “For me this rests on the topic of the exercise”) and because of disagreements about preferred terminology (“The difference between lay and patient and public partners is potentially confusing”; “DO NOT change ‘participants’ to ‘partners’”). However, the Steering Committee identified many situations where the inclusion of patients would be considered essential. Priority-setting and core outcome identification are just 2 areas where patient participation in consensus exercises is becoming standard [35–37]. Based on unanimous agreement (11/11), the Steering Committee decided to reinstate M10 as reporting item M5, while taking into account the most consistent comments regarding wording (notably, that “lay” should not be used).

Items with high level of agreement that were modified

Three original items (R3, R6, and R7) overlapped, all covering aspects of which data were reported from the Delphi voting rounds. During the checklist finalization workshops, the Steering Committee discussed these 3 items and combined them to create 2 final items, R3 (quantitative data) and R4 (qualitative data). In addition, the Steering Committee noted an overlap between original items M22 and R8 related to modifications made to items or topics during the consensus process (see S8 Text). These 2 items were combined to create the final item R5. Finally, M13 was revised to remove a conceptual overlap with M12 and to use clearer language. Together, these decisions account for the final item count: the 36 items that reached consensus in the Delphi, plus 1 restored item, minus 2 items removed through combining, give the 35 items in the final checklist.

Final checklist

The final ACCORD checklist comprised 35 items that were identified as essential to ensure clear and transparent reporting of consensus studies. The finalized ACCORD checklist is presented in Table 4 and is available to download and complete (S9 Text).

Table 4. The final ACCORD checklist for the reporting of consensus methods.

https://doi.org/10.1371/journal.pmed.1004326.t004

Discussion

The ACCORD checklist has been developed using a robust and systematic approach, with input from participants with a variety of areas of expertise, and it is now available for any health researcher to use to report studies that use consensus methods. The process of developing ACCORD itself used consensus methods, which are reported here according to the checklist developed.

Why ACCORD was needed

The need for optimal reporting of consensus methods has been documented for decades [19,24]. The absence of a reporting guideline that encompasses the range of consensus methods may contribute to poor reporting quality [5], and this prompted the development of the ACCORD checklist.

There are 2 EQUATOR-listed reporting guidelines that provide guidance for specific projects that typically include consensus exercises: AGREE-II has only 1 item, “Formulation of Recommendations,” relating to the method used to obtain consensus [27]. COS-STAR includes only 3 items around the definition of consensus and a “description of how the consensus process was undertaken” [28]. In addition, CREDES [26] is a method- and specialty-specific guideline aimed at supporting the conduct and reporting of Delphi studies in palliative care. None of these guidelines is suitable as a comprehensive and general tool for reporting any type of consensus exercise. ACCORD addresses the breadth of methods used to attain consensus (including the Delphi method) and should be complementary to AGREE-II where a clinical practice guideline also includes a formal consensus development process. Another reporting guideline currently under development, DELPHISTAR [25], is Delphi specific and covers medical and social sciences. ACCORD extends beyond Delphi methods and encompasses a wide range of consensus methods in various health-related fields.

Although familiarity with ACCORD is likely to be useful to ensure relevant elements are considered when designing a consensus study, it is a reporting guideline and not a mandate for study conduct. The methodological background to the items and published examples of what we consider to be good reporting will be discussed in the ACCORD Explanation and Elaboration document (manuscript in preparation).

Strengths and limitations

ACCORD was conducted through an open, collaborative process with a predefined, published protocol [29]. It started with a systematic review [19] using robust methods of searching, screening, and extraction, which led to the identification of common gaps in reporting consensus methods. Only 18 studies were eligible for inclusion in the systematic review, and data extraction generated 30 potential checklist items. An additional 26 items were identified that were not covered by the data extraction list. Following this thorough process, these 56 potential items were supplemented by a further 9 proposed by the Steering Committee, with an additional 7 proposed by Delphi panelists.

The ACCORD checklist involved input from participants with a wide range of expertise, including methodologists, patient advocates, healthcare professionals, journal editors, publication professionals, and representatives from the pharmaceutical industry and bodies such as NICE and the Scottish Intercollegiate Guidelines Network. With a few exceptions reported here, their recommendations were fully adopted and integrated into the final checklist. ACCORD was developed to assist everyone involved in consensus-based activities or research. It will assure participants that methods will be accurately reported; guide authors when writing up a publication; help journal editors and peer reviewers when assessing a manuscript for publication; and enhance trust in the recommendations made by consensus panels. Our hope is that ACCORD will ultimately benefit patients by improving the transparency and robustness of consensus studies in healthcare.

A limitation of the ACCORD initiative is that the panel was not as diverse as we hoped. ACCORD was a meta-research project drawing on work from many countries, but our view is that diversity of expertise and personal experience always strengthens consensus discussions. Our aim was to broaden the diversity of contributors to ACCORD by recruiting a panel more diverse than the Steering Committee in geography and experience, to avoid perpetuating, and to dilute, any biases held by the Steering Committee. Although invitations were sent to potential panelists in South America, Asia, Africa, and Oceania, few responses were obtained, leading to limited participation from these continents and a panel that was largely drawn from Europe and North America. Similarly, the professional diversity of the ACCORD panel was not as broad as we hoped, with patient partners and policymakers relatively underrepresented compared with clinicians. Therefore, in the future, greater efforts should be made to recruit panelists with experience in consensus from a broader range of professions, as well as from other regions and countries with different cultures and health systems. For example, although some experience of clinical psychology exists in the Steering Committee, the inclusion of more behavioral scientists with experience of decision-making processes would be helpful. Similarly, the inclusion of more policymakers would strengthen the representation of their perspective on consensus reporting, helping to ensure that it is relevant and reliable and, therefore, acceptable to be referenced and to inform policy. Although these biases were not fully mitigated, future revisions or extensions to ACCORD will aim to improve in this regard.

Members of the ACCORD Steering Committee did not vote in the Delphi surveys. In our process, the virtual workshops held to finalize the ACCORD checklist did not include the Delphi panel. This might be seen as a limitation by some, especially those involved in reporting guidelines development, as a consensus meeting including some expert members of the Delphi panel is usually conducted according to the guidance issued by the EQUATOR Network [32]. However, our process held the Steering Committee and Delphi panel separate: the Steering Committee did not participate in the Delphi panel, and the Delphi panelists did not participate in the final consensus discussions. We suggest that this could in fact be seen as a strength of our process since, while the larger Delphi panel did not reach consensus on 1 particular item, discussion among the Steering Committee led to its inclusion in the final checklist without full approval of the Delphi panel (see results and commentary for item M5). If the panelists had been part of the final consensus meeting, this may have resulted in the omission from the final checklist of this item, which related to patient participation in consensus studies. However, the experience represented by the Steering Committee recognized the value of patient participation in consensus recommendations, the importance of which is reported in the literature [38], and voted to include this item.

Stability of agreement across voting rounds indicates whether consensus is genuinely present within a group. There are several methods to assess stability, but ACCORD adhered to a simple definition: achieving the a priori agreed threshold for agreement over a minimum of 2 voting rounds [39].

Another limitation that consensus and survey specialists may note is that the items in our Delphi survey were not presented to panelists in a random order. Since ACCORD was proposing content items for the sections of a scientific manuscript (title, introduction, methods, results, and discussion), we preferred to present items within these sections in the order in which they usually appear, to enhance comprehension and avoid confusion. This constraint may affect the development of all reporting guidelines. In fact, several panelists provided feedback on how to order the items.

The implementation of the ACCORD reporting guideline

Many reporting guidelines are published without initiatives to facilitate implementation. Only 15.7% of guidelines on the EQUATOR Network website mentioned an implementation plan [33]. An implementation study to inform an Explanation and Elaboration document has been completed and the results submitted for presentation at a conference. The full ACCORD implementation plan and supporting materials are being developed and will be available on the ACCORD website (https://www.ismpp.org/accord).

The future of ACCORD

Robust reporting is particularly important for studies using consensus methods given that so many methods exist and researchers frequently make modifications to “standard” methods. We anticipate that updates of the ACCORD checklist will be necessary, as technology and consensus methods continue to evolve.

Besides updates, ACCORD could have extensions developed in areas such as nonclinical biomedical studies, health economics, or health informatics and artificial intelligence, and even beyond healthcare, with input from appropriate experts. The Steering Committee welcomes feedback and interest from other researchers in these areas.

Conclusions

The ACCORD reporting guideline provides the scientific community with an important tool to improve the completeness and transparency of reporting of studies that use consensus methods. The ACCORD checklist supports authors in writing manuscripts with sufficient information to enable readers to understand the study’s methods, the study’s results, and the interpretation of those results so that they can draw their own conclusions about the robustness and credibility of the recommendations.

Supporting information

S3 Text. Information pack for Delphi panelists.

https://doi.org/10.1371/journal.pmed.1004326.s003

(DOCX)

S4 Text. Example of feedback provided to panelists.

https://doi.org/10.1371/journal.pmed.1004326.s004

(DOCX)

S5 Text. Feedback documents provided to Delphi panelists.

https://doi.org/10.1371/journal.pmed.1004326.s005

(PDF)

S6 Text. Recommended approaches to approved and rejected items used during the checklist finalization workshops.

https://doi.org/10.1371/journal.pmed.1004326.s006

(DOCX)

S7 Text. Criteria for the standardization of terms used to guide reporting in ACCORD.

https://doi.org/10.1371/journal.pmed.1004326.s007

(DOCX)

S8 Text. Summary of Delphi voting rounds.

https://doi.org/10.1371/journal.pmed.1004326.s008

(DOCX)

Acknowledgments

The authors would like to thank all the Delphi panelists for their vital contribution to the project, including Anirudha Agnihotry, DDS; Brian S. Alper, MD, MSPH, FAAFP, FAMIA; Julián Amorín-Montes, MD; Thierry Auperin, PhD; Slavka Baronikova; Franco Bazzoli; Marnie Brennan; Melissa Brouwers, PhD; Klara Brunnhuber; Teresa M. Chan; Martine Docking; Jenny Fanstone; Ivan D. Florez; Suzanne B. Gangi; Sean Grant; Susan Humphrey-Murto; Alexandra Frances Kavaney; Rachel E. Kettle, PhD; Samson G. Khachatryan; Karim Khan, MD, PhD; Margarita Lens, MSci; Elizabeth Loder, MD, MPH; Aubrey Malden; Lidwine B. Mokkink; Ronald Munatsi; Prof. Dr Marlen Niederberger; Mina Patel, PhD; William R. Phillips, MD, MPH; Kris Pierce; Sheuli Porkess; Weini Qiu; Linda Romagnano, PhD; Maurizio Scarpa, MD, PhD; Dan Shanahan; Paul Sinclair; Prof. Ripudaman Singh; Dr Curtis Sonny; Ms Ailsa Stein; Carey M. Suehs; Bob Stevens; Dr Chit Su Tinn; Prof. Vasiliy Vlassov, MD; Konstantin P. Vorobyov, MD. Project management support was provided by Mark Rolfe, Helen Bremner, Amie Hedges, and Mehraj Ahmed from Oxford PharmaGenesis. The authors would also like to thank ISMPP for the support it provided, in particular the input from the current President, Robert Matheis, at the outset of the project. Jan Schoones (Leiden University Medical Centre) assisted in the development of the search strategy. Laura Harrington, PhD, an employee of Ogilvy Health, provided medical writing support.

The authors would like to thank their respective employers for allowing them to contribute their time to ACCORD. Over the course of the project, Ogilvy Health, OPEN Health, Ipsen, Bristol Myers Squibb, AbbVie and CRUK, via a grant to PL, also funded their representatives’ attendance at meetings during which ACCORD updates were being presented. Oxford PharmaGenesis provided financial support to some Steering Committee members to attend face-to-face ACCORD meetings during which ACCORD updates were presented. Oxford PharmaGenesis also paid for the meeting room hire and catering associated with a 1-day ACCORD Steering Committee meeting held in Oxford, United Kingdom, in September 2022.

References

  1. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312(7023):71–72. pmid:8555924; PubMed Central PMCID: PMC2349778.
  2. Szajewska H. Evidence-based medicine and clinical research: both are needed, neither is perfect. Ann Nutr Metab. 2018;72 Suppl 3:13–23. Epub 20180409. pmid:29631266.
  3. Greenhalgh T. Will COVID-19 be evidence-based medicine’s nemesis? PLoS Med. 2020;17(6):e1003266. Epub 20200630. pmid:32603323; PubMed Central PMCID: PMC7326185.
  4. Murphy MK, Black NA, Lamping DL, McKee CM, Sanderson CF, Askham J, et al. Consensus development methods, and their use in clinical guideline development. Health Technol Assess. 1998;2(3):i–iv, 1–88. pmid:9561895.
  5. Diamond IR, Grant RC, Feldman BM, Pencharz PB, Ling SC, Moore AM, et al. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014;67(4):401–409. pmid:24581294.
  6. Woudenberg F. An evaluation of Delphi. Technol Forecast Soc Change. 1991;40(2):131–150.
  7. Delbecq AL, van de Ven AH, Gustafson DH. Group Techniques for Program Planning: a guide to nominal group and Delphi processes. Glenview, Illinois, USA: Scott, Foresman and Company; 1975.
  8. Fitch K, Bernstein SJ, Aguilar MD, Burnand B, LaCalle JR, Lazaro P, et al. The RAND/UCLA appropriateness method user’s manual. Santa Monica, California, USA: RAND Corporation; 2001 [cited 2023 Jun 6]. Available from: https://www.rand.org/pubs/monograph_reports/MR1269.html.
  9. van Melick N, van Cingel REH, Brooijmans F, Neeter C, van Tienen T, Hullegie W, et al. Evidence-based clinical practice update: practice guidelines for anterior cruciate ligament rehabilitation based on a systematic review and multidisciplinary consensus. Br J Sports Med. 2016;50(24):1506–1515. Epub 20160818. pmid:27539507.
  10. Sadowski DC, Camilleri M, Chey WD, Leontiadis GI, Marshall JK, Shaffer EA, et al. Canadian Association of Gastroenterology Clinical Practice Guideline on the management of bile acid diarrhea. J Can Assoc Gastroenterol. 2020;3(1):e10–e27. Epub 20191206. pmid:32010878; PubMed Central PMCID: PMC6985689.
  11. Zuberbier T, Abdul Latiff AH, Abuzakouk M, Aquilina S, Asero R, Baker D, et al. The international EAACI/GA2LEN/EuroGuiDerm/APAAACI guideline for the definition, classification, diagnosis, and management of urticaria. Allergy. 2022;77(3):734–766. Epub 20211020. pmid:34536239.
  12. Clayton-Smith M, Narayanan H, Shelton C, Bates L, Brennan F, Deido B, et al. Greener Operations: a James Lind Alliance Priority Setting Partnership to define research priorities in environmentally sustainable perioperative practice through a structured consensus approach. BMJ Open. 2023;13(3):e066622. Epub 20230328. pmid:36977540; PubMed Central PMCID: PMC10069275.
  13. Munblit D, Nicholson T, Akrami A, Apfelbacher C, Chen J, De Groote W, et al. A core outcome set for post-COVID-19 condition in adults for use in clinical practice and research: an international Delphi consensus study. Lancet Respir Med. 2022;10(7):715–724. Epub 20220614. pmid:35714658; PubMed Central PMCID: PMC9197249.
  14. Rubino F, Puhl RM, Cummings DE, Eckel RH, Ryan DH, Mechanick JI, et al. Joint international consensus statement for ending stigma of obesity. Nat Med. 2020;26(4):485–497. Epub 20200304. pmid:32127716; PubMed Central PMCID: PMC7154011.
  15. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. Epub 20210329. pmid:33782057; PubMed Central PMCID: PMC8005924.
  16. Kurvers RHJM, Herzog SM, Hertwig R, Krause J, Carney PA, Bogart A, et al. Boosting medical diagnostics by pooling independent judgments. Proc Natl Acad Sci U S A. 2016;113(31):8777–8782. Epub 20160718. pmid:27432950; PubMed Central PMCID: PMC4978286.
  17. Surowiecki J. The wisdom of crowds. New York, USA: Anchor; 2004.
  18. Woolley AW, Chabris CF, Pentland A, Hashmi N, Malone TW. Evidence for a collective intelligence factor in the performance of human groups. Science. 2010;330(6004):686–688. Epub 20100930. pmid:20929725.
  19. van Zuuren EJ, Logullo P, Price A, Fedorowicz Z, Hughes EL, Gattrell WT. Existing guidance on reporting of consensus methodology: a systematic review to inform ACCORD guideline development. BMJ Open. 2022;12(9):e065154. Epub 20220908. pmid:36201247; PubMed Central PMCID: PMC9462098.
  20. Barnes C, Boutron I, Giraudeau B, Porcher R, Altman DG, Ravaud P. Impact of an online writing aid tool for writing a randomized trial report: the COBWEB (Consort-based WEB tool) randomized controlled trial. BMC Med. 2015;13:221. Epub 20150915. pmid:26370288; PubMed Central PMCID: PMC4570037.
  21. Dechartres A, Trinquart L, Atal I, Moher D, Dickersin K, Boutron I, et al. Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: research on research study. BMJ. 2017;357:j2490. Epub 20170608. pmid:28596181.
  22. Turner L, Shamseer L, Altman DG, Schulz KF, Moher D. Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Syst Rev. 2012;1:60. Epub 20121129. pmid:23194585; PubMed Central PMCID: PMC3564748.
  23. Blazey P, Crossley KM, Ardern CL, van Middelkoop M, Scott A, Khan KM. It is time for consensus on ‘consensus statements’. Br J Sports Med. 2022;56(6):306–307. Epub 20210923. pmid:34556467; PubMed Central PMCID: PMC8899487.
  24. Gupta UG, Clarke RE. Theory and applications of the Delphi technique: a bibliography (1975–1994). Technol Forecast Soc Change. 1996;53(2):185–211.
  25. Spranger J, Homberg A, Sonnberger M, Niederberger M. Reporting guidelines for Delphi techniques in health sciences: a methodological review. Z Evid Fortbild Qual Gesundhwes. 2022;172:1–11. Epub 20220617. pmid:35718726.
  26. Jünger S, Payne SA, Brine J, Radbruch L, Brearley SG. Guidance on Conducting and REporting DElphi Studies (CREDES) in palliative care: recommendations based on a methodological systematic review. Palliat Med. 2017;31(8):684–706. Epub 20170213. pmid:28190381.
  27. Brouwers MC, Kerkvliet K, Spithoff K, AGREE Next Steps Consortium. The AGREE Reporting Checklist: a tool to improve reporting of clinical practice guidelines. BMJ. 2016;352:i1152. Epub 20160308. pmid:26957104; PubMed Central PMCID: PMC5118873.
  28. Kirkham JJ, Gorst S, Altman DG, Blazeby JM, Clarke M, Devane D, et al. Core Outcome Set-STAndards for Reporting: The COS-STAR Statement. PLoS Med. 2016;13(10):e1002148. Epub 20161018. pmid:27755541; PubMed Central PMCID: PMC5068732.
  29. Gattrell WT, Hungin AP, Price A, Winchester CC, Tovey D, Hughes EL, et al. ACCORD guideline for reporting consensus-based methods in biomedical research and clinical practice: a study protocol. Res Integr Peer Rev. 2022;7(1):3. Epub 20220607. pmid:35672782; PubMed Central PMCID: PMC9171734.
  30. Open Science Framework. ACCORD registration with the Open Science Framework. 2022 [cited 2023 Jun 6]. Available from: https://osf.io/2rzm9.
  31. The EQUATOR Network. ACCORD registration at the EQUATOR Network. 2021 [cited 2023 Jun 6]. Available from: https://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#ACCORD.
  32. Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217. Epub 20100216. pmid:20169112; PubMed Central PMCID: PMC2821895.
  33. Schlussel MM, Sharp MK, de Beyer JA, Kirtley S, Logullo P, Dhiman P, et al. Reporting guidelines used varying methodology to develop recommendations. J Clin Epidemiol. 2023;159:246–256. Epub 20230323. pmid:36965598.
  34. The EQUATOR Network. EQUATOR Network Newsletter October 2021. 2021 [cited 2023 Jun 6]. Available from: https://mailchi.mp/e54a81276f98/the-equator-network-newsletter-october-2021.
  35. Clavisi O, Bragge P, Tavender E, Turner T, Gruen RL. Effective stakeholder participation in setting research priorities using a Global Evidence Mapping approach. J Clin Epidemiol. 2013;66(5):496–502.e2. Epub 20120718. pmid:22819249.
  36. Dijkstra HP, Mc Auliffe S, Ardern CL, Kemp JL, Mosler AB, Price A, et al. Infographic. Oxford consensus on primary cam morphology and femoroacetabular impingement syndrome—natural history of primary cam morphology to inform clinical practice and research priorities on conditions affecting the young person’s hip. Br J Sports Med. 2023;57(6):382–384. Epub 20230117. pmid:36650034; PubMed Central PMCID: PMC9985723.
  37. Staniszewska S, Brett J, Simera I, Seers K, Mockford C, Goodlad S, et al. GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research. BMJ. 2017;358:j3453. Epub 20170802. pmid:28768629; PubMed Central PMCID: PMC5539518.
  38. Dodd S, Gorst SL, Young A, Lucas SW, Williamson PR. Patient participation impacts outcome domain selection in core outcome sets for research: an updated systematic review. J Clin Epidemiol. 2023;158:127–133. Epub 20230411. pmid:37054902.
  39. von der Gracht HA. Consensus measurement in Delphi studies: review and implications for future quality assurance. Technol Forecast Soc Change. 2012;79(8):1525–1536.