Reliability of a structured method of selecting abstracts for a plastic surgical scientific meeting

L. P E Van der Steen, J. Joris Hage, Moshe Kon, Riccardo Mazzola

Research output: Contribution to journal › Article

14 Citations (Scopus)

Abstract

There is no generally accepted method for assessing abstracts that are submitted for a medical scientific meeting. This article describes the development and prospective evaluation of such a method applied to the 220 abstracts submitted for the 2000 Annual Meeting of the European Association of Plastic Surgeons. Structured abstracts were evaluated in three categories: aesthetic surgery, basic research, and clinical study. Each anonymous abstract was assessed separately by 10 reputable European plastic surgeons. These reviewers used a structured rating questionnaire which resulted in a score given by each reviewer to each abstract between -6 and +6. The scores of all 10 reviewers were added for each abstract, and the papers were accepted in each of the three categories on the basis of this abridged score. To evaluate the reliability of this structured method of selection, the interrater agreement among the reviewers was tested by means of kappa analysis and the Cronbach alpha coefficient. The kappa values for agreement among reviewers regarding acceptability of abstracts were low, but the alpha coefficient indicated an acceptable degree of reliability of the average reviewers' ratings for all categories. Using a structured questionnaire can be helpful in the objective assessment of abstracts for a scientific meeting and may facilitate comparison of abstracts. Meritocratic dichotomy of abstracts by the reviewers is advocated to further improve reliability of the rating. Even though reliability generally increases with the number of reviewers, the annual increase of submitted abstracts may necessitate a decrease in the number of reviewers for each abstract.
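The reliability analysis summarized above (summed reviewer scores, kappa for agreement on acceptability, and Cronbach's alpha for the averaged ratings) can be sketched in a few lines of Python. This is an illustrative reconstruction only: the paper does not publish code, the ratings below are synthetic, and the dichotomization threshold (score > 0 counts as "accept") and the pairwise Cohen's kappa used here are assumptions, not necessarily the exact variants the authors applied.

import numpy as np

rng = np.random.default_rng(0)
n_reviewers, n_abstracts = 10, 220                    # figures taken from the abstract
scores = rng.integers(-6, 7, size=(n_reviewers, n_abstracts))  # hypothetical ratings in [-6, +6]

# Summed score per abstract: the basis on which papers were reportedly accepted.
summed = scores.sum(axis=0)

def cronbach_alpha(ratings):
    """Cronbach's alpha, treating each reviewer (row) as an 'item'."""
    k = ratings.shape[0]
    item_vars = ratings.var(axis=1, ddof=1)           # variance of each reviewer's ratings
    total_var = ratings.sum(axis=0).var(ddof=1)       # variance of the summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def cohen_kappa(a, b):
    """Cohen's kappa for two reviewers' binary accept/reject decisions."""
    po = np.mean(a == b)                              # observed agreement
    pe = np.mean(a) * np.mean(b) + np.mean(1 - a) * np.mean(1 - b)  # chance agreement
    return (po - pe) / (1 - pe)

# Assumed dichotomization: a positive score counts as "accept".
accept = (scores > 0).astype(int)

print("highest summed score:", summed.max())
print("Cronbach's alpha:", round(cronbach_alpha(scores), 2))
print("kappa, reviewers 1 vs 2:", round(cohen_kappa(accept[0], accept[1]), 2))

The formulas make the paper's pattern of results explicit: alpha measures the reliability of the summed or averaged ratings and therefore grows with the number of reviewers, whereas kappa reflects pairwise agreement and does not, which is consistent with low kappa values alongside an acceptable alpha.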

Original language: English
Pages (from-to): 2215-2222
Number of pages: 8
Journal: Plastic and Reconstructive Surgery
Volume: 111
Issue number: 7
DOIs: 10.1097/01.PRS.0000061092.88629.82
Publication status: Published - Jun 2003

Fingerprint

  • Plastic Surgery
  • Research
  • Surveys and Questionnaires
  • Surgeons
  • Clinical Studies

ASJC Scopus subject areas

  • Surgery

Cite this

Reliability of a structured method of selecting abstracts for a plastic surgical scientific meeting. / Van der Steen, L. P E; Hage, J. Joris; Kon, Moshe; Mazzola, Riccardo.

In: Plastic and Reconstructive Surgery, Vol. 111, No. 7, 06.2003, p. 2215-2222.

@article{80f5c75efa9c40b3aecf02b5caee5b6c,
title = "Reliability of a structured method of selecting abstracts for a plastic surgical scientific meeting",
abstract = "There is no generally accepted method for assessing abstracts that are submitted for a medical scientific meeting. This article describes the development and prospective evaluation of such a method applied to the 220 abstracts submitted for the 2000 Annual Meeting of the European Association of Plastic Surgeons. Structured abstracts were evaluated in three categories: aesthetic surgery, basic research, and clinical study. Each anonymous abstract was assessed separately by 10 reputable European plastic surgeons. These reviewers used a structured rating questionnaire which resulted in a score given by each reviewer to each abstract between -6 and +6. The scores of all 10 reviewers were added for each abstract, and the papers were accepted in each of the three categories on the basis of this abridged score. To evaluate the reliability of this structured method of selection, the interrater agreement among the reviewers was tested by means of kappa analysis and the Cronbach alpha coefficient. The kappa values for agreement among reviewers regarding acceptability of abstracts were low, but the alpha coefficient indicated an acceptable degree of reliability of the average reviewers' ratings for all categories. Using a structured questionnaire can be helpful in the objective assessment of abstracts for a scientific meeting and may facilitate comparison of abstracts. Meritocratic dichotomy of abstracts by the reviewers is advocated to further improve reliability of the rating. Even though reliability generally increases with the number of reviewers, the annual increase of submitted abstracts may necessitate a decrease in the number of reviewers for each abstract.",
author = "{Van der Steen}, {L. P E} and Hage, {J. Joris} and Moshe Kon and Riccardo Mazzola",
year = "2003",
month = "6",
doi = "10.1097/01.PRS.0000061092.88629.82",
language = "English",
volume = "111",
pages = "2215--2222",
journal = "Plastic and Reconstructive Surgery",
issn = "0032-1052",
publisher = "Lippincott Williams and Wilkins",
number = "7",

}

TY - JOUR

T1 - Reliability of a structured method of selecting abstracts for a plastic surgical scientific meeting

AU - Van der Steen, L. P E

AU - Hage, J. Joris

AU - Kon, Moshe

AU - Mazzola, Riccardo

PY - 2003/6

Y1 - 2003/6

AB - There is no generally accepted method for assessing abstracts that are submitted for a medical scientific meeting. This article describes the development and prospective evaluation of such a method applied to the 220 abstracts submitted for the 2000 Annual Meeting of the European Association of Plastic Surgeons. Structured abstracts were evaluated in three categories: aesthetic surgery, basic research, and clinical study. Each anonymous abstract was assessed separately by 10 reputable European plastic surgeons. These reviewers used a structured rating questionnaire which resulted in a score given by each reviewer to each abstract between -6 and +6. The scores of all 10 reviewers were added for each abstract, and the papers were accepted in each of the three categories on the basis of this abridged score. To evaluate the reliability of this structured method of selection, the interrater agreement among the reviewers was tested by means of kappa analysis and the Cronbach alpha coefficient. The kappa values for agreement among reviewers regarding acceptability of abstracts were low, but the alpha coefficient indicated an acceptable degree of reliability of the average reviewers' ratings for all categories. Using a structured questionnaire can be helpful in the objective assessment of abstracts for a scientific meeting and may facilitate comparison of abstracts. Meritocratic dichotomy of abstracts by the reviewers is advocated to further improve reliability of the rating. Even though reliability generally increases with the number of reviewers, the annual increase of submitted abstracts may necessitate a decrease in the number of reviewers for each abstract.

UR - http://www.scopus.com/inward/record.url?scp=0037643666&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0037643666&partnerID=8YFLogxK

U2 - 10.1097/01.PRS.0000061092.88629.82

DO - 10.1097/01.PRS.0000061092.88629.82

M3 - Article

C2 - 12794462

AN - SCOPUS:0037643666

VL - 111

SP - 2215

EP - 2222

JO - Plastic and Reconstructive Surgery

JF - Plastic and Reconstructive Surgery

SN - 0032-1052

IS - 7

ER -