Mythbusting Monday is a series of blog posts in which we invite faculty members from the College of Education to discuss education myths and what we are doing to combat them. We want our faculty to have a voice in busting these myths, and we hope that by making these issues personal we can promote and provoke conversations. We will reference the book 50 Myths and Lies That Threaten America’s Public Schools: The Real Crisis in Education by David Berliner, Gene V Glass, and Associates (2014).
Myth #9: Teachers are the most important influence in a child’s education
Contributor: Dr. Kathryn Brooks
Good teachers matter. Supportive schools matter. Every child deserves good teachers and schools. However, teachers and schools do not have a strong influence on student standardized test scores. Teacher and school evaluation systems in most states are based, at least in part, on how well students perform on standardized tests. These systems are flawed because factors beyond the teacher and school level exert the greatest influence over student performance on standardized tests.
The research unequivocally shows that there is no statistically significant relationship between student performance on standardized tests and school or teacher quality. Basing high-stakes decisions about schools and teachers on bad data will lead to bad decisions. Thousands of research studies over the past 50 years have consistently shown that we cannot establish a link between student test scores and school or teacher quality. This body of research is among the most robust we have in education and psychometrics. We could write thousands of pages about these studies, but the following statements capture the essence of this research:
- More than 50 years of extensive research on the impact of teachers and schools on student achievement indicates that typically only 7-10% of the variability in student performance on standardized tests is attributable to teacher- and school-level factors (Coleman, 1966; Heubert & Hauser, 1999; Rivkin, Hanushek, & Kain, 2005; Schochet & Chiang, 2010). Ninety percent or more of standardized test score performance is attributable to factors unrelated to schools and teachers. The National Research Council of the National Academy of Sciences considers value-added measures of teacher effectiveness “too unstable to be considered fair or reliable” (Heubert & Hauser, 1999).
- Examining error rates for teacher-level analyses under the growth model, Schochet and Chiang (2010) found an error rate of 26% over three years of comparison data. This means that more than one in four teachers were erroneously identified as underperforming or overperforming when they were actually average performers. Schochet and Chiang (2010) also found that a single year of teacher data has an error rate of 35%, and they estimated that school-level error rates fall between 16% and 21%. This means that roughly 1 in 6 to 1 in 5 schools is likely to be falsely identified as a low- or high-growth school when its growth is actually average. These error rates will be much higher for schools that do not reflect the average demographics of the state. While Indiana has not released the error rates for our growth model, our model was based on the growth models studied by Schochet and Chiang. The American Statistical Association (2014) calls the statistical underpinnings of growth-model systems unstable, even under ideal conditions, because of their large error rates.
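The instability behind these error rates is easy to see in a toy simulation. The sketch below is a hypothetical illustration only, not Schochet and Chiang's actual model: it gives every teacher a small true effect, adds the much larger year-to-year sampling noise in test-score gains (the sizes of both are assumptions chosen for illustration), and then flags the bottom 20% of teachers by their noisy one-year estimate. A large share of the flagged teachers turn out not to be low performers at all:

```python
import random

random.seed(42)

N = 10_000      # hypothetical number of teachers
TRUE_SD = 1.0   # spread of true teacher effects (assumed)
NOISE_SD = 2.0  # one-year sampling noise in score gains (assumed to be
                # larger than TRUE_SD, mirroring how little test-score
                # variance is attributable to teachers)

# Each teacher's true effect, and the noisy one-year estimate of it.
true_effect = [random.gauss(0, TRUE_SD) for _ in range(N)]
estimate = [t + random.gauss(0, NOISE_SD) for t in true_effect]

# Flag the bottom 20% of teachers by their noisy one-year estimate.
cutoff = sorted(estimate)[int(0.2 * N)]
flagged = [i for i in range(N) if estimate[i] < cutoff]

# Of the flagged teachers, how many are not truly in the bottom 20%?
true_cutoff = sorted(true_effect)[int(0.2 * N)]
false_flags = sum(1 for i in flagged if true_effect[i] >= true_cutoff)
print(f"{false_flags / len(flagged):.0%} of flagged teachers were not "
      f"truly in the bottom 20%")
```

Because the noise dwarfs the true teacher effect in this sketch, roughly half of the flagged teachers are misidentified, which is the same basic phenomenon behind the error rates reported above.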
The difficulty of connecting school and teacher quality to student standardized test scores stems from the myriad factors that influence student performance on these tests. Non-school variables such as:
- low birth-weight and non-genetic prenatal influences on children;
- inadequate medical, dental, and vision care, often a result of inadequate or no medical insurance;
- food insecurity;
- environmental pollutants;
- family relations and family stress; and
- neighborhood characteristics (Berliner, 2009, p. 1),
exert a much greater influence on student achievement than do school-related factors. Even Betebenner (2008), the developer of Indiana’s school accountability model, raised concerns about asserting a direct link between student growth and school performance:
> As with student achievement for a school, it would be a mistake to assert that the school is solely responsible for the growth of its students… The median student growth percentile is descriptive and makes no assertion about the cause of student growth (p. 5).
Non-school-related factors create too much statistical noise for even the most prominent statisticians in our country to determine how schools and teachers influence student standardized test scores.
References

American Statistical Association. (2014). Executive summary of the ASA statement on using value-added models for educational assessment. Retrieved from https://www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf
Baker, E. L., et al. (2010). Problems with the Use of Student Test Scores to Evaluate Teachers. Washington, DC: Economic Policy Institute. Retrieved from epi.3cdn.net/b9667271ee6c154195_t9m6iij8k.pdf
Berliner, D. C. (2009). Poverty and potential: Out-of-school factors and school success. Boulder, CO: Education and the Public Interest Center & Education Policy Research Unit.
Betebenner, D. (2008). A primer on student growth percentiles. Dover, NH: National Center for the Improvement of Educational Assessment.
Coleman, J. (1966). Equality of educational opportunity. Washington, D.C.: U.S. Government Printing Office.
Franco, M. S., & Seidel, K. (2012). Evidence for the need to more closely examine school effects in value-added modeling and related accountability policies. Education and Urban Society, 44(1).
Gong, B., Perie, M., & Dunn, J. (2006). Using student longitudinal growth measures for school accountability under No Child Left Behind: An update to inform design decisions. Center for Assessment. Retrieved from http://www.nciea.org/publications/GrowthModelUpdate_BGMAPJD07.pdf
Heubert, J.P., & Hauser, R.M. (1999). High stakes: Testing for tracking, promotion, and graduation. Washington, DC: National Academy Press.
Hout, M., & Elliott, S. (Eds.). (2011). Incentives and test-based accountability in education. National Research Council. Washington, DC: National Academies Press. Retrieved from http://books.nap.edu/openbook.php?record_id=12521
Jones, B., & Egley, R. (2004). Voices from the frontlines: Teachers’ perceptions of high-stakes testing. Education Policy Analysis Archives, 12(39).
Jones, M. G., Jones, B. D., Hardin, B., Chapman, L., Yarbrough, T., & Davis, M. (1999). The impact of high-stakes testing on teachers and students in North Carolina. Phi Delta Kappan, 81(3), 199-203.
Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73(2), 417-458.
Schochet, P. Z., & Chiang, H. S. (2010). Error rates in measuring teacher and school performance based on student test score gains (NCEE 2010-4004). Washington, DC: National Center for Educational Evaluation and Regional Assistance, Institute of Education Sciences, United States Department of Education.
Stevens, J., & Zvoch, K. (2006). Issues in the implementation of longitudinal growth models for student achievement. In R. W. Lissitz (Ed.) Longitudinal and value added models of student performance (pp. 170-209). Maple Grove, MN: JAM Press.