TY - JOUR
T1 - From Flawed Design to Misleading Information: The U.S. Department of Education's Early Intervention Child Outcomes Evaluation
AU - Rosenberg, Steven A.
AU - Elbaum, Batya
AU - Rosenberg, Cordelia Robinson
AU - Kellar-Guenther, Yvonne
AU - McManus, Beth M.
N1 - Funding Information:
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported in part by grants awarded to JFK Partners, University of Colorado School of Medicine, from the Administration on Intellectual and Developmental Disabilities, University Center of Excellence in Developmental Disabilities Grant 90DD0699, and the Maternal and Child Health Bureau, Leadership Education in Neurodevelopmental Disabilities (LEND) Grant T73MC11044. Dr. McManus was supported by a Comprehensive Opportunities in Rehabilitation Research Training (CORRT) K12 Award (K12 HD055931) through the National Institutes of Health.
Publisher Copyright:
© The Author(s) 2017.
PY - 2018/9/1
Y1 - 2018/9/1
AB - It is a matter of concern when large, federally funded programs are evaluated using designs that produce misleading information. In this article, we discuss problems associated with an evaluation design adopted by the U.S. Department of Education, Office of Special Education Programs (OSEP) to document the performance of a major early intervention (EI) program serving young children with developmental delays and disabilities. In particular, we focus on OSEP’s requirement that states use a single-group pre–post comparison design to evaluate the impact of EI on child outcomes. We also provide a data-based illustration showing that this evaluation design cannot distinguish child progress that is due to EI services from changes associated with other factors, such as regression to the mean. We hope this work will support the adoption of evaluation designs that are more in line with accepted principles of program evaluation.
KW - Part C early intervention
KW - accountability
KW - child outcomes
KW - developmental delays
KW - evaluation practice
UR - http://www.scopus.com/inward/record.url?scp=85048028853&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85048028853&partnerID=8YFLogxK
DO - 10.1177/1098214017732410
M3 - Article
AN - SCOPUS:85048028853
VL - 39
SP - 350
EP - 363
JO - American Journal of Evaluation
JF - American Journal of Evaluation
SN - 1098-2140
IS - 3
ER -