Abstract
The problem of assessing the content validity (or relevance) of standardized achievement tests is considered within the framework of generalizability theory. Four illustrative designs are described that may be used to assess test‐item fit to a curriculum. For each design, appropriate variance components are identified for making relative and absolute item (or test) selection decisions. Special consideration is given to the use of these procedures for determining the number of raters and/or schools needed in a content‐validation decision‐making study. Application of these procedures is illustrated using data from an international assessment of mathematics achievement.
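The contrast between relative and absolute selection decisions corresponds, in generalizability theory, to the generalizability (G) coefficient and the dependability (Phi) coefficient. The sketch below is a minimal illustration of a decision (D) study for a single items × raters design, assuming hypothetical variance-component estimates; it is not a reproduction of the article's four designs or its specific procedures.

```python
# Minimal sketch of a D-study for an items x raters design (assumed setup,
# not the article's exact procedure). Variance-component values are
# hypothetical placeholders standing in for estimates from a G-study.

def d_study(var_item, var_rater, var_resid, n_raters):
    """Return (G, Phi) coefficients for n_raters ratings per item."""
    rel_error = var_resid / n_raters                # relative error variance
    abs_error = (var_rater + var_resid) / n_raters  # absolute error variance
    g_coef = var_item / (var_item + rel_error)      # for relative decisions
    phi_coef = var_item / (var_item + abs_error)    # for absolute decisions
    return g_coef, phi_coef

if __name__ == "__main__":
    # Hypothetical estimates: item, rater, and item x rater (plus error) components
    var_i, var_r, var_ir_e = 0.30, 0.05, 0.20
    for n_r in (1, 2, 4, 8):
        g, phi = d_study(var_i, var_r, var_ir_e, n_r)
        print(f"n_raters={n_r}: G={g:.3f}, Phi={phi:.3f}")
```

Because the absolute error variance also includes the rater main effect, Phi can never exceed G; running the loop over different numbers of raters shows how many raters are needed before either coefficient reaches a chosen threshold.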
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 287-299 |
| Number of pages | 13 |
| Journal | Journal of Educational Measurement |
| Volume | 25 |
| Issue number | 4 |
| DOIs | |
| State | Published - 1988 |
ASJC Scopus subject areas
- Education
- Developmental and Educational Psychology
- Applied Psychology
- Psychology (miscellaneous)