A comparison of evaluation metrics for biomedical journals, articles, and websites in terms of sensitivity to topic

Lawrence D. Fu, Yindalon Aphinyanaphongs, Lily Wang, Constantin F. Aliferis

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

Evaluating the biomedical literature and health-related websites for quality is a challenging information retrieval task. Commonly used methods include the impact factor for journals, PubMed's clinical query filters and machine-learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and other users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods across a variety of topics. The impact factor, clinical query filters, and PageRank vary widely across topics, while a topic-specific impact factor and machine-learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average yet struggle on many narrower topics. Topic-adjusted metrics and other topic-robust methods have an advantage in such situations. Users of traditional, topic-sensitive metrics should be aware of their limitations.
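For context on the metrics the abstract compares, the following is a minimal sketch of how a conventional two-year journal impact factor and a topic-restricted variant could be computed from a small citation table. The Article record layout, the topic-labelling scheme, and the restriction rule are illustrative assumptions for this sketch, not the formulation used in the paper.

# Illustrative sketch only: a generic two-year impact factor and a
# topic-restricted variant. The data layout (Article records with a
# `topics` set) and the restriction rule are assumptions for the example.
from dataclasses import dataclass, field


@dataclass
class Article:
    journal: str
    year: int
    topics: set = field(default_factory=set)               # e.g. {"oncology"}
    citations_by_year: dict = field(default_factory=dict)  # year -> citation count


def impact_factor(articles, journal, year):
    """Citations received in `year` by the journal's articles from the two
    preceding years, divided by the number of those articles."""
    window = [a for a in articles
              if a.journal == journal and a.year in (year - 1, year - 2)]
    if not window:
        return 0.0
    cites = sum(a.citations_by_year.get(year, 0) for a in window)
    return cites / len(window)


def topic_impact_factor(articles, journal, year, topic):
    """Same ratio, but computed only over articles labelled with `topic`
    (a hypothetical labelling scheme for this example)."""
    on_topic = [a for a in articles if topic in a.topics]
    return impact_factor(on_topic, journal, year)


if __name__ == "__main__":
    corpus = [
        Article("J Biomed Inform", 2009, {"retrieval"}, {2011: 12}),
        Article("J Biomed Inform", 2010, {"retrieval"}, {2011: 4}),
        Article("J Biomed Inform", 2010, {"genomics"}, {2011: 30}),
    ]
    print(impact_factor(corpus, "J Biomed Inform", 2011))                         # 15.33...
    print(topic_impact_factor(corpus, "J Biomed Inform", 2011, "retrieval"))      # 8.0

The point of the comparison, per the abstract, is that an unrestricted metric can look strong on average while behaving quite differently once attention is narrowed to a single topic, which is where topic-adjusted variants are more stable.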

Original language: English (US)
Pages (from-to): 587-594
Number of pages: 8
Journal: Journal of Biomedical Informatics
Volume: 44
Issue number: 4
DOI: 10.1016/j.jbi.2011.03.006
State: Published - Aug 1 2011
Externally published: Yes

Fingerprint

Learning systems
Websites
Information retrieval
Health
Journal Impact Factor
Information Storage and Retrieval
PubMed
Research Personnel
Machine Learning

Keywords

  • Bibliometrics
  • Information retrieval
  • Journal impact factor
  • Machine learning
  • PageRank
  • Topic-sensitivity

ASJC Scopus subject areas

  • Computer Science Applications
  • Health Informatics

Cite this

A comparison of evaluation metrics for biomedical journals, articles, and websites in terms of sensitivity to topic. / Fu, Lawrence D.; Aphinyanaphongs, Yindalon; Wang, Lily; Aliferis, Constantin F.

In: Journal of Biomedical Informatics, Vol. 44, No. 4, 01.08.2011, p. 587-594.

Research output: Contribution to journal › Article

Fu, Lawrence D. ; Aphinyanaphongs, Yindalon ; Wang, Lily ; Aliferis, Constantin F. / A comparison of evaluation metrics for biomedical journals, articles, and websites in terms of sensitivity to topic. In: Journal of Biomedical Informatics. 2011 ; Vol. 44, No. 4. pp. 587-594.
@article{743bd587ec7c495c9e94ab43c7a9fd8a,
title = "A comparison of evaluation metrics for biomedical journals, articles, and websites in terms of sensitivity to topic",
abstract = "Evaluating the biomedical literature and health-related websites for quality is a challenging information retrieval task. Commonly used methods include the impact factor for journals, PubMed's clinical query filters and machine-learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and other users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods across a variety of topics. The impact factor, clinical query filters, and PageRank vary widely across topics, while a topic-specific impact factor and machine-learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average yet struggle on many narrower topics. Topic-adjusted metrics and other topic-robust methods have an advantage in such situations. Users of traditional, topic-sensitive metrics should be aware of their limitations.",
keywords = "Bibliometrics, Information retrieval, Journal impact factor, Machine learning, PageRank, Topic-sensitivity",
author = "Fu, {Lawrence D.} and Yindalon Aphinyanaphongs and Lily Wang and Aliferis, {Constantin F.}",
year = "2011",
month = "8",
day = "1",
doi = "10.1016/j.jbi.2011.03.006",
language = "English (US)",
volume = "44",
pages = "587--594",
journal = "Journal of Biomedical Informatics",
issn = "1532-0464",
publisher = "Academic Press Inc.",
number = "4",

}

TY - JOUR

T1 - A comparison of evaluation metrics for biomedical journals, articles, and websites in terms of sensitivity to topic

AU - Fu, Lawrence D.

AU - Aphinyanaphongs, Yindalon

AU - Wang, Lily

AU - Aliferis, Constantin F.

PY - 2011/8/1

Y1 - 2011/8/1

N2 - Evaluating the biomedical literature and health-related websites for quality is a challenging information retrieval task. Commonly used methods include the impact factor for journals, PubMed's clinical query filters and machine-learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and other users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods across a variety of topics. The impact factor, clinical query filters, and PageRank vary widely across topics, while a topic-specific impact factor and machine-learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average yet struggle on many narrower topics. Topic-adjusted metrics and other topic-robust methods have an advantage in such situations. Users of traditional, topic-sensitive metrics should be aware of their limitations.

AB - Evaluating the biomedical literature and health-related websites for quality is a challenging information retrieval task. Commonly used methods include the impact factor for journals, PubMed's clinical query filters and machine-learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and other users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods across a variety of topics. The impact factor, clinical query filters, and PageRank vary widely across topics, while a topic-specific impact factor and machine-learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average yet struggle on many narrower topics. Topic-adjusted metrics and other topic-robust methods have an advantage in such situations. Users of traditional, topic-sensitive metrics should be aware of their limitations.

KW - Bibliometrics

KW - Information retrieval

KW - Journal impact factor

KW - Machine learning

KW - PageRank

KW - Topic-sensitivity

UR - http://www.scopus.com/inward/record.url?scp=79960563989&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=79960563989&partnerID=8YFLogxK

U2 - 10.1016/j.jbi.2011.03.006

DO - 10.1016/j.jbi.2011.03.006

M3 - Article

VL - 44

SP - 587

EP - 594

JO - Journal of Biomedical Informatics

JF - Journal of Biomedical Informatics

SN - 1532-0464

IS - 4

ER -