Interobserver agreement in clinical grading of vitreous haze using alternative grading scales

Dana M. Hornbeak, Abhishek Payal, Maxwell Pistilli, Jyotirmay Biswas, Sudha K. Ganesh, Vishali Gupta, Sivakumar R. Rathinam, Janet L. Davis, John H. Kempen

Research output: Contribution to journal › Article

17 Citations (Scopus)

Abstract

Purpose: To evaluate the reliability of clinical grading of vitreous haze using a new 9-step ordinal scale versus the existing 6-step ordinal scale. Design: Evaluation of a diagnostic test (interobserver agreement study). Participants: A total of 119 consecutive patients (204 uveitic eyes) presenting for uveitis subspecialty care on the study day at 1 of 3 large uveitis centers. Methods: Five pairs of uveitis specialists clinically graded vitreous haze in the same eyes, one after the other using the same equipment, on both the 6- and 9-step scales. Main Outcome Measures: Agreement in vitreous haze grade between each pair of specialists, evaluated by the κ statistic (exact agreement and agreement within 1 or 2 grades). Results: The scales correlated well (Spearman's ρ = 0.84). Exact agreement was modest on both the 6-step and 9-step scales: average κ = 0.46 (range, 0.28-0.81) and κ = 0.40 (range, 0.15-0.63), respectively. Within-1-grade agreement was slightly more favorable for the scale with fewer steps, but values were excellent for both scales: κ = 0.75 (range, 0.66-0.96) and κ = 0.62 (range, 0.38-0.87), respectively. Within-2-grade agreement for the 9-step scale also was excellent (κ = 0.85; range, 0.79-0.92). Twice as many cases were potentially eligible for a clinical trial on the basis of the 9-step scale as on the 6-step scale (P < 0.001). Conclusions: With an appropriate change threshold (≥2-step differences for the 6-step scale and ≥3-step differences for the 9-step scale), both scales are sufficiently reproducible under clinical grading for clinical and research use. The results suggest that more eyes are likely to meet eligibility criteria for trials using the 9-step scale. The 9-step scale appears more reproducible with Reading Center grading than with clinical grading, suggesting that Reading Center grading may be preferable for clinical trials.
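The outcome measure above is the κ statistic computed at several agreement tolerances (exact, within 1 grade, within 2 grades). As an illustration only, here is a minimal pure-Python sketch of Cohen's κ generalized to count grades within `tol` steps as agreement; `rater1` and `rater2` are made-up grades on a 0-8 (9-step) scale, not study data, and the published analysis may have used a different κ variant (e.g., a weighted κ).

```python
from collections import Counter

def kappa_within(g1, g2, tol=0):
    """Cohen's kappa, counting two grades as agreeing when they differ by <= tol steps."""
    n = len(g1)
    # Observed agreement: fraction of eyes graded within tol steps of each other.
    p_obs = sum(abs(a - b) <= tol for a, b in zip(g1, g2)) / n
    c1, c2 = Counter(g1), Counter(g2)
    # Chance agreement: probability that independently drawn grades fall within tol.
    p_exp = sum((c1[a] / n) * (c2[b] / n)
                for a in c1 for b in c2 if abs(a - b) <= tol)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical grades from two graders on a 9-step (0-8) scale -- illustration only.
rater1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 4]
rater2 = [0, 2, 2, 3, 5, 5, 6, 8, 8, 3]
print(round(kappa_within(rater1, rater2, tol=0), 2))  # exact agreement -> 0.56
print(round(kappa_within(rater1, rater2, tol=1), 2))  # within 1 grade -> 1.0
```

Relaxing the tolerance, as in the study's within-1- and within-2-grade analyses, credits near-misses between graders, which is why those κ values run higher than exact-agreement κ.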

Original language: English
Pages (from-to): 1643-1648
Number of pages: 6
Journal: Ophthalmology
Volume: 121
Issue number: 8
DOI: https://doi.org/10.1016/j.ophtha.2014.02.018
State: Published - Jan 1 2014

Keywords

  • Abbreviation and Acronym
  • Standardization of Uveitis Nomenclature
  • SUN

ASJC Scopus subject areas

  • Ophthalmology
  • Medicine(all)

Cite this

Hornbeak, D. M., Payal, A., Pistilli, M., Biswas, J., Ganesh, S. K., Gupta, V., ... Kempen, J. H. (2014). Interobserver agreement in clinical grading of vitreous haze using alternative grading scales. Ophthalmology, 121(8), 1643-1648. https://doi.org/10.1016/j.ophtha.2014.02.018

@article{7242a1a24e2b47208b71d403d9b15672,
title = "Interobserver agreement in clinical grading of vitreous haze using alternative grading scales",
abstract = "Purpose To evaluate the reliability of clinical grading of vitreous haze using a new 9-step ordinal scale versus the existing 6-step ordinal scale. Design Evaluation of diagnostic test (interobserver agreement study). Participants A total of 119 consecutive patients (204 uveitic eyes) presenting for uveitis subspecialty care on the study day at 1 of 3 large uveitis centers. Methods Five pairs of uveitis specialists clinically graded vitreous haze in the same eyes, one after the other using the same equipment, using the 6- and 9-step scales. Main Outcome Measures Agreement in vitreous haze grade between each pair of specialists was evaluated by the κ statistic (exact agreement and agreement within 1 or 2 grades). Results The scales correlated well (Spearman's ρ = 0.84). Exact agreement was modest using both the 6-step and 9-step scales: average κ = 0.46 (range, 0.28-0.81) and κ = 0.40 (range, 0.15-0.63), respectively. Within 1-grade agreement was slightly more favorable for the scale with fewer steps, but values were excellent for both scales: κ = 0.75 (range, 0.66-0.96) and κ = 0.62 (range, 0.38-0.87), respectively. Within 2-grade agreement for the 9-step scale also was excellent (κ = 0.85; range, 0.79-0.92). Two-fold more cases were potentially clinical trial eligible on the basis of the 9-step than the 6-step scale (P<0.001). Conclusions Both scales are sufficiently reproducible using clinical grading for clinical and research use with the appropriate threshold (≥2- and ≥3-step differences for the 6- and 9-step scales, respectively). The results suggest that more eyes are likely to meet eligibility criteria for trials using the 9-step scale. The 9-step scale appears to have higher reproducibility with Reading Center grading than clinical grading, suggesting that Reading Center grading may be preferable for clinical trials.",
keywords = "Abbreviation and Acronym, Standardization of Uveitis Nomenclature, SUN",
author = "Hornbeak, {Dana M.} and Abhishek Payal and Maxwell Pistilli and Jyotirmay Biswas and Ganesh, {Sudha K.} and Vishali Gupta and Rathinam, {Sivakumar R.} and Davis, {Janet L.} and Kempen, {John H.}",
year = "2014",
month = "1",
day = "1",
doi = "10.1016/j.ophtha.2014.02.018",
language = "English",
volume = "121",
pages = "1643--1648",
journal = "Ophthalmology",
issn = "0161-6420",
publisher = "Elsevier Inc.",
number = "8",

}

TY - JOUR

T1 - Interobserver agreement in clinical grading of vitreous haze using alternative grading scales

AU - Hornbeak, Dana M.

AU - Payal, Abhishek

AU - Pistilli, Maxwell

AU - Biswas, Jyotirmay

AU - Ganesh, Sudha K.

AU - Gupta, Vishali

AU - Rathinam, Sivakumar R.

AU - Davis, Janet L.

AU - Kempen, John H.

PY - 2014/1/1

Y1 - 2014/1/1

AB - Purpose To evaluate the reliability of clinical grading of vitreous haze using a new 9-step ordinal scale versus the existing 6-step ordinal scale. Design Evaluation of diagnostic test (interobserver agreement study). Participants A total of 119 consecutive patients (204 uveitic eyes) presenting for uveitis subspecialty care on the study day at 1 of 3 large uveitis centers. Methods Five pairs of uveitis specialists clinically graded vitreous haze in the same eyes, one after the other using the same equipment, using the 6- and 9-step scales. Main Outcome Measures Agreement in vitreous haze grade between each pair of specialists was evaluated by the κ statistic (exact agreement and agreement within 1 or 2 grades). Results The scales correlated well (Spearman's ρ = 0.84). Exact agreement was modest using both the 6-step and 9-step scales: average κ = 0.46 (range, 0.28-0.81) and κ = 0.40 (range, 0.15-0.63), respectively. Within 1-grade agreement was slightly more favorable for the scale with fewer steps, but values were excellent for both scales: κ = 0.75 (range, 0.66-0.96) and κ = 0.62 (range, 0.38-0.87), respectively. Within 2-grade agreement for the 9-step scale also was excellent (κ = 0.85; range, 0.79-0.92). Two-fold more cases were potentially clinical trial eligible on the basis of the 9-step than the 6-step scale (P<0.001). Conclusions Both scales are sufficiently reproducible using clinical grading for clinical and research use with the appropriate threshold (≥2- and ≥3-step differences for the 6- and 9-step scales, respectively). The results suggest that more eyes are likely to meet eligibility criteria for trials using the 9-step scale. The 9-step scale appears to have higher reproducibility with Reading Center grading than clinical grading, suggesting that Reading Center grading may be preferable for clinical trials.

KW - Abbreviation and Acronym

KW - Standardization of Uveitis Nomenclature

KW - SUN

UR - http://www.scopus.com/inward/record.url?scp=84905506024&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84905506024&partnerID=8YFLogxK

U2 - 10.1016/j.ophtha.2014.02.018

DO - 10.1016/j.ophtha.2014.02.018

M3 - Article

C2 - 24697913

AN - SCOPUS:84905506024

VL - 121

SP - 1643

EP - 1648

JO - Ophthalmology

JF - Ophthalmology

SN - 0161-6420

IS - 8

ER -