Abstract
Evaluating the biomedical literature and health-related websites for quality is a challenging information retrieval task. Commonly used methods include the impact factor for journals, PubMed's clinical query filters and machine learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering topic, leaving it unknown how performance varies across specific topics or focused searches. Clinicians, researchers, and users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods across a variety of topics. The impact factor, clinical query filters, and PageRank vary widely across topics, while a topic-specific impact factor and machine learning-based filter models are more stable. The results demonstrate that a method may perform well on average yet poorly on many narrower topics. Topic-adjusted metrics and other topic-robust methods have an advantage in such situations. Users of traditional topic-sensitive metrics should be aware of their limitations.
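For concreteness, the journal impact factor contrasted here with its topic-specific variant can be sketched as below. This is a minimal Python illustration under stated assumptions: the `Article` record, the two-year citation window, and the `topic` tag are illustrative choices, not the paper's exact data model or method.

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    journal: str
    year: int
    topics: set = field(default_factory=set)               # e.g. MeSH-style topic tags (assumed)
    citations_by_year: dict = field(default_factory=dict)  # year -> citation count

def impact_factor(articles, journal, year, topic=None):
    """Two-year impact factor: citations received in `year` by the
    journal's articles from the two preceding years, divided by the
    number of those articles. Passing `topic` restricts both the
    numerator and the denominator to articles tagged with that topic,
    yielding a topic-specific variant."""
    window = [a for a in articles
              if a.journal == journal
              and a.year in (year - 1, year - 2)
              and (topic is None or topic in a.topics)]
    if not window:
        return 0.0
    cites = sum(a.citations_by_year.get(year, 0) for a in window)
    return cites / len(window)
```

Restricting both sides of the ratio to one topic is what makes the score stable under focused searches: a journal's strong showing in one field can no longer mask weak coverage of another.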
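Similarly, the PageRank scores discussed above follow the standard power-iteration recurrence. The sketch below assumes a simple link map and uniform redistribution for dangling pages; it is not the specific implementation evaluated in the study.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Standard PageRank via power iteration. `links` maps each page
    to the list of pages it links to; every linked page must also
    appear as a key."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p in pages:
            outs = links[p] or pages  # dangling page: spread its rank uniformly
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

# Tiny example: three pages in a cycle with one extra edge.
scores = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

Because the recurrence depends only on link structure, a page's rank reflects its global popularity, which is exactly why it can misjudge pages that matter within a narrow topic.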
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 587-594 |
| Number of pages | 8 |
| Journal | Journal of biomedical informatics |
| Volume | 44 |
| Issue number | 4 |
| DOIs | |
| State | Published - Aug 2011 |
| Externally published | Yes |
Keywords
- Bibliometrics
- Information retrieval
- Journal impact factor
- Machine learning
- PageRank
- Topic-sensitivity
ASJC Scopus subject areas
- Computer Science Applications
- Health Informatics