Interobserver Agreement Among Uveitis Experts on Uveitic Diagnoses: The Standardization of Uveitis Nomenclature Experience

Standardization of Uveitis Nomenclature Working Group

Research output: Contribution to journal › Article

5 Citations (Scopus)

Abstract

Purpose: To evaluate the interobserver agreement among uveitis experts on the diagnosis of specific uveitic diseases.
Design: Interobserver agreement analysis.
Methods: Five committees, each composed of 9 individuals and working in parallel, reviewed cases from a preliminary database of 25 uveitic diseases, collected by disease, and voted independently online on whether or not each case was the disease in question. The agreement statistic, κ, was calculated for the 36 pairwise comparisons of raters for each disease, and a mean κ was calculated for each disease. After the independent online voting, committee consensus conference calls, using nominal group techniques, reviewed all cases that had not achieved supermajority agreement (>75%) on the diagnosis in the online voting, in an attempt to reach supermajority agreement.
Results: A total of 5766 cases for the 25 diseases were evaluated. The overall mean κ for the entire project was 0.39, with disease-specific values ranging from 0.23 to 0.79. After the formalized consensus conference calls addressing cases that had not achieved supermajority agreement in the online voting, supermajority agreement was reached on approximately 99% of cases overall, with disease-specific rates ranging from 96% to 100%.
Conclusions: Agreement among uveitis experts on diagnosis is moderate at best but can be improved by discussion among them. These data suggest the need for validated and widely used classification criteria in the field of uveitis.
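The Methods lend themselves to a compact illustration of how the pairwise agreement statistic and the supermajority rule work. The Python sketch below is not the Working Group's analysis code; it is a minimal, hypothetical example assuming each committee member's votes on a disease's cases are recorded as 1 ("is the disease in question") or 0 ("is not"). With 9 raters per committee there are C(9,2) = 36 rater pairs, so the per-disease mean κ averages 36 pairwise Cohen's κ values, and a case reaches supermajority agreement only when at least 7 of the 9 votes (>75%) fall on the same side.

from itertools import combinations
from typing import Sequence

def cohen_kappa(votes_a: Sequence[int], votes_b: Sequence[int]) -> float:
    # Cohen's kappa for two raters casting binary votes on the same set of cases.
    n = len(votes_a)
    observed = sum(a == b for a, b in zip(votes_a, votes_b)) / n
    # Chance agreement from each rater's marginal rate of "yes" votes.
    p_a, p_b = sum(votes_a) / n, sum(votes_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    if expected == 1.0:
        # Both raters unanimous and identical; kappa is undefined, treat as full agreement.
        return 1.0
    return (observed - expected) / (1 - expected)

def mean_pairwise_kappa(votes_by_rater: Sequence[Sequence[int]]) -> float:
    # Mean kappa over all rater pairs; 9 raters give C(9, 2) = 36 pairwise comparisons.
    pairs = list(combinations(range(len(votes_by_rater)), 2))
    kappas = [cohen_kappa(votes_by_rater[i], votes_by_rater[j]) for i, j in pairs]
    return sum(kappas) / len(kappas)

def has_supermajority(case_votes: Sequence[int], threshold: float = 0.75) -> bool:
    # A case passes when more than 75% of members vote the same way (>= 7 of 9 votes).
    yes = sum(case_votes)
    n = len(case_votes)
    return max(yes, n - yes) / n > threshold

Under this sketch, the cases that fail has_supermajority in the online round correspond to those the Results describe as being forwarded to the consensus conference calls.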

Original language: English (US)
Pages (from-to): 19-24
Number of pages: 6
Journal: American Journal of Ophthalmology
Volume: 186
DOI: 10.1016/j.ajo.2017.10.028
State: Published - Feb 1 2018

ASJC Scopus subject areas

  • Ophthalmology

Cite this

Interobserver Agreement Among Uveitis Experts on Uveitic Diagnoses: The Standardization of Uveitis Nomenclature Experience. / Standardization of Uveitis Nomenclature Working Group.

In: American Journal of Ophthalmology, Vol. 186, 01.02.2018, p. 19-24.

Research output: Contribution to journal › Article

@article{1c03db8cb2cd46c2b1e4eeb0d086c6d2,
title = "Interobserver Agreement Among Uveitis Experts on Uveitic Diagnoses: The Standardization of Uveitis Nomenclature Experience",
abstract = "Purpose To evaluate the interobserver agreement among uveitis experts on the diagnosis of the specific uveitic disease. Design Interobserver agreement analysis. Methods Five committees, each comprised of 9 individuals and working in parallel, reviewed cases from a preliminary database of 25 uveitic diseases, collected by disease, and voted independently online whether the case was the disease in question or not. The agreement statistic, κ, was calculated for the 36 pairwise comparisons for each disease, and a mean κ was calculated for each disease. After the independent online voting, committee consensus conference calls, using nominal group techniques, reviewed all cases not achieving supermajority agreement (>75{\%}) on the diagnosis in the online voting to attempt to arrive at a supermajority agreement. Results A total of 5766 cases for the 25 diseases were evaluated. The overall mean κ for the entire project was 0.39, with disease-specific variation ranging from 0.23 to 0.79. After the formalized consensus conference calls to address cases that did not achieve supermajority agreement in the online voting, supermajority agreement overall was reached on approximately 99{\%} of cases, with disease-specific variation ranging from 96{\%} to 100{\%}. Conclusions Agreement among uveitis experts on diagnosis is moderate at best but can be improved by discussion among them. These data suggest the need for validated and widely used classification criteria in the field of uveitis.",
author = "{Standardization of Uveitis Nomenclature Working Group} and Jabs, {Douglas A.} and Andrew Dick and Doucette, {John T.} and Amod Gupta and Susan Lightman and Peter McCluskey and Okada, {Annabelle A.} and Palestine, {Alan G.} and Rosenbaum, {James T.} and Saleem, {Sophia M.} and Jennifer Thorne and Brett Trusko and Jyotirmay Biswas and Shigeaki Ohno and Manabu Mochizuki and Chee, {Soon Phaik} and Tisha Prabriputaloong and Justine Smith and Richard Stawell and Lyndell Lim and Ehud Zamir and Dennis Wakefield and Talin Barisani-Asenbauer and Laure Caspers and Bahram Bodaghi and Phuc LeHoang and Antoine Brezin and Manfred Zierhut and Massimo Accorinti and Paola Pivetti-Pezzi and Piergiorgio Neri and Seerp Baarsma and Aniki Rothova and Matthias Becker and {de Smet}, Marc and Elizabeth Graham and John Forrester and Philip Murray and Radgonde Amer and Michal Kramer and Zohar Habot-Wilner and Leyla Atmaca and Jean Deschenes and Belair, {Marie Lyne} and Janet Davis and Anat Galor and Careen Lowder and Sunil Srivastava and Michael Zegans and Glenn Jaffe",
year = "2018",
month = "2",
day = "1",
doi = "10.1016/j.ajo.2017.10.028",
language = "English (US)",
volume = "186",
pages = "19--24",
journal = "American Journal of Ophthalmology",
issn = "0002-9394",
publisher = "Elsevier USA",
}

TY - JOUR

T1 - Interobserver Agreement Among Uveitis Experts on Uveitic Diagnoses

T2 - The Standardization of Uveitis Nomenclature Experience

AU - Standardization of Uveitis Nomenclature Working Group

AU - Jabs, Douglas A.

AU - Dick, Andrew

AU - Doucette, John T.

AU - Gupta, Amod

AU - Lightman, Susan

AU - McCluskey, Peter

AU - Okada, Annabelle A.

AU - Palestine, Alan G.

AU - Rosenbaum, James T.

AU - Saleem, Sophia M.

AU - Thorne, Jennifer

AU - Trusko, Brett

AU - Biswas, Jyotirmay

AU - Ohno, Shigeaki

AU - Mochizuki, Manabu

AU - Chee, Soon Phaik

AU - Prabriputaloong, Tisha

AU - Smith, Justine

AU - Stawell, Richard

AU - Lim, Lyndell

AU - Zamir, Ehud

AU - Wakefield, Dennis

AU - Barisani-Asenbauer, Talin

AU - Caspers, Laure

AU - Bodaghi, Bahram

AU - LeHoang, Phuc

AU - Brezin, Antoine

AU - Zierhut, Manfred

AU - Accorinti, Massimo

AU - Pivetti-Pezzi, Paola

AU - Neri, Piergiorgio

AU - Baarsma, Seerp

AU - Rothova, Aniki

AU - Becker, Matthias

AU - de Smet, Marc

AU - Graham, Elizabeth

AU - Forrester, John

AU - Murray, Philip

AU - Amer, Radgonde

AU - Kramer, Michal

AU - Habot-Wilner, Zohar

AU - Atmaca, Leyla

AU - Deschenes, Jean

AU - Belair, Marie Lyne

AU - Davis, Janet

AU - Galor, Anat

AU - Lowder, Careen

AU - Srivastava, Sunil

AU - Zegans, Michael

AU - Jaffe, Glenn

PY - 2018/2/1

Y1 - 2018/2/1

N2 - Purpose To evaluate the interobserver agreement among uveitis experts on the diagnosis of the specific uveitic disease. Design Interobserver agreement analysis. Methods Five committees, each comprised of 9 individuals and working in parallel, reviewed cases from a preliminary database of 25 uveitic diseases, collected by disease, and voted independently online whether the case was the disease in question or not. The agreement statistic, κ, was calculated for the 36 pairwise comparisons for each disease, and a mean κ was calculated for each disease. After the independent online voting, committee consensus conference calls, using nominal group techniques, reviewed all cases not achieving supermajority agreement (>75%) on the diagnosis in the online voting to attempt to arrive at a supermajority agreement. Results A total of 5766 cases for the 25 diseases were evaluated. The overall mean κ for the entire project was 0.39, with disease-specific variation ranging from 0.23 to 0.79. After the formalized consensus conference calls to address cases that did not achieve supermajority agreement in the online voting, supermajority agreement overall was reached on approximately 99% of cases, with disease-specific variation ranging from 96% to 100%. Conclusions Agreement among uveitis experts on diagnosis is moderate at best but can be improved by discussion among them. These data suggest the need for validated and widely used classification criteria in the field of uveitis.

UR - http://www.scopus.com/inward/record.url?scp=85037650570&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85037650570&partnerID=8YFLogxK

U2 - 10.1016/j.ajo.2017.10.028

DO - 10.1016/j.ajo.2017.10.028

M3 - Article

C2 - 29122577

AN - SCOPUS:85037650570

VL - 186

SP - 19

EP - 24

JO - American Journal of Ophthalmology

JF - American Journal of Ophthalmology

SN - 0002-9394

ER -