{"id":500,"date":"2015-04-27T14:10:29","date_gmt":"2015-04-27T18:10:29","guid":{"rendered":"http:\/\/blogs.butler.edu\/coe\/?p=500"},"modified":"2015-04-27T14:10:29","modified_gmt":"2015-04-27T18:10:29","slug":"mythbusting-monday-teacher-influence","status":"publish","type":"post","link":"https:\/\/blogs.butler.edu\/coe\/2015\/04\/27\/mythbusting-monday-teacher-influence\/","title":{"rendered":"Mythbusting Monday &#8211; Teacher influence"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\" size-medium wp-image-478 aligncenter\" src=\"http:\/\/blogs.butler.edu\/coe\/files\/2015\/03\/mythbusting-300x35.jpg\" alt=\"mythbusting\" width=\"300\" height=\"35\" srcset=\"https:\/\/blogs.butler.edu\/coe\/files\/2015\/03\/mythbusting-300x35.jpg 300w, https:\/\/blogs.butler.edu\/coe\/files\/2015\/03\/mythbusting.jpg 581w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/p>\n<p>Mythbusting Monday is a series of blog posts\u00a0where we invite\u00a0different faculty members from\u00a0the College of Education to discuss\u00a0education\u00a0myths and how or what we are doing to combat the myth, fix the myth, etc.\u00a0 We want our\u00a0faculty to have a voice in busting these myths and we hope by making these issues personal we can promote and provoke conversations. \u00a0We will reference the book\u00a0<em>50 Myths and Lies That Threaten America\u2019s Public Schools: The Real Crisis in Education\u00a0<\/em>by David Berliner, Gene V Glass and Associates, 2014.<\/p>\n<p><strong>Myth #9:<\/strong> \u00a0Teachers are the most important influence in a child\u2019s education<\/p>\n<p><strong>Contributor:<\/strong> Dr. Kathryn Brooks<\/p>\n<p>Good teachers matter.\u00a0 Supportive schools matter. Every child deserves good teachers and schools. 
However, teachers and schools do not have a strong influence on student standardized test scores. Teacher and school evaluation systems in most states are based, at least in part, on how well their students perform on standardized tests. Yet these systems are flawed, because factors outside the control of teachers and schools exert the greatest influence over student performance on standardized tests.<\/p>\n<p>The research consistently shows only a weak relationship between student performance on standardized tests and school or teacher quality. Basing high-stakes decisions about schools and teachers on bad data will lead to bad decisions. Thousands of research studies over the past 50 years have consistently shown that we cannot reliably infer school or teacher quality from student test scores. This body of research is some of the most robust that we have in education and psychometrics. We could write thousands of pages about these studies, but the following statements capture the essence of this research:<\/p>\n<ul>\n<li>50+ years of extensive research on the impact of teachers and schools on student achievement indicates that typically only 7-10% of the variability in student performance on standardized tests is attributable to teacher- and school-level factors (Coleman, 1966; Heubert &amp; Hauser, 1999; Rivkin, Hanushek, &amp; Kain, 2005; Schochet &amp; Chiang, 2010). Ninety percent or more of standardized test score performance is attributable to factors unrelated to schools and teachers.
The National Research Council of the National Academy of Sciences considers value-added measures of teacher effectiveness \u201ctoo unstable to be considered fair or reliable\u201d (Heubert &amp; Hauser, 1999).<\/li>\n<li>Analyzing error rates for teacher-level growth-model estimates, Schochet and Chiang (2010) found an error rate of 26% even with three years of comparison data. This means that more than one in four teachers were erroneously identified as underperforming or overperforming when they were actually average performers. Schochet and Chiang (2010) also found that using just one year of teacher data raises the error rate to 35%, and they estimated that school-level error rates fall between 16% and 21%. This means that more than 1 in 6 schools is likely to be falsely identified as a low- or high-growth school when it is actually of average growth. These error rates will be much higher for schools that do not reflect the average demographics of the state. While Indiana has not released the error rates for its growth model, that model was based on the growth models studied by Schochet and Chiang. The American Statistical Association (2014) calls the statistical underpinnings of growth-model systems unstable, even under ideal conditions, because of their large error rates.<\/li>\n<\/ul>\n<p>The difficulty of connecting school and teacher quality to student standardized test scores stems from the myriad factors that influence student performance. Non-school variables such as:<\/p>\n<ul>\n<li>low birth weight and non-genetic prenatal influences on children;<\/li>\n<li>inadequate medical, dental, and vision care, often a result of inadequate or no medical insurance;<\/li>\n<li>food insecurity;<\/li>\n<li>environmental pollutants;<\/li>\n<li>family relations and family stress; and<\/li>\n<li>neighborhood characteristics (Berliner, 2009, p. 
1),<\/li>\n<\/ul>\n<p>exert a much greater influence on student achievement than do school-related factors. Even Betebenner (2008), the developer of Indiana\u2019s school accountability model, raised concerns about drawing a direct link between student growth and school performance:<\/p>\n<p>As with student achievement for a school, it would be a mistake to assert that the school is solely responsible for the growth of its students&#8230; The median student growth percentile is descriptive and makes no assertion about the cause of student growth (p. 5).<\/p>\n<p>Non-school-related factors create too much statistical noise for even the most prominent statisticians in our country to determine how schools and teachers influence student standardized test scores.<\/p>\n<p><strong>References<\/strong><\/p>\n<p>American Statistical Association (2014). <em>Executive summary of the ASA statement on using value-added models for educational assessment<\/em>. Retrieved from <a href=\"https:\/\/www.amstat.org\/policy\/pdfs\/ASA_VAM_Statement.pdf\">https:\/\/www.amstat.org\/policy\/pdfs\/ASA_VAM_Statement.pdf<\/a><\/p>\n<p>Baker, E. L., et al. (2010). <em>Problems with the use of student test scores to evaluate teachers<\/em>. Washington, DC: Economic Policy Institute. Retrieved from epi.3cdn.net\/b9667271ee6c154195_t9m6iij8k.pdf<\/p>\n<p>Betebenner, D. (2008). <em>A primer on student growth percentiles<\/em>. Dover, NH: National Center for the Improvement of Educational Assessment.<\/p>\n<p>Coleman, J. (1966). <em>Equality of educational opportunity<\/em>. Washington, DC: U.S. Government Printing Office.<\/p>\n<p>Franco, M. S., &amp; Seidel, K. (2012). Evidence for the need to more closely examine school effects in value-added modeling and related accountability policies. <em>Education and Urban Society, 44<\/em>(1).<\/p>\n<p>Gong, B., Perie, M., &amp; Dunn, J. (2006). 
<em>Using student longitudinal growth measures for school accountability under No Child Left Behind: An update to inform design decisions<\/em>. Center for Assessment. Retrieved from <a href=\"http:\/\/www.nciea.org\/publications\/GrowthModelUpdate_BGMAPJD07.pdf\">http:\/\/www.nciea.org\/publications\/GrowthModelUpdate_BGMAPJD07.pdf<\/a><\/p>\n<p>Heubert, J. P., &amp; Hauser, R. M. (1999). <em>High stakes: Testing for tracking, promotion, and graduation<\/em>. Washington, DC: National Academy Press.<\/p>\n<p>Hout, M., &amp; Elliott, S. (Eds.). (2011). <em>Incentives and test-based accountability in education<\/em>. National Research Council of the National Academies of Science. Retrieved from <a href=\"http:\/\/books.nap.edu\/openbook.php?record_id=12521\">http:\/\/books.nap.edu\/openbook.php?record_id=12521<\/a><\/p>\n<p>Jones, B., &amp; Egley, R. (2004). Voices from the frontlines: Teachers\u2019 perceptions of high-stakes testing. <em>Education Policy Analysis Archives, 12<\/em>(39).<\/p>\n<p>Jones, M. G., Jones, B. D., Hardin, B., Chapman, L., Yarbrough, T., &amp; Davis, M. (1999). The impact of high-stakes testing on teachers and students in North Carolina. <em>Phi Delta Kappan, 81<\/em>(3), 199-203.<\/p>\n<p>Rivkin, S. G., Hanushek, E. A., &amp; Kain, J. F. (2005). Teachers, schools, and academic achievement. <em>Econometrica, 73<\/em>(2), 417-458.<\/p>\n<p>Schochet, P. Z., &amp; Chiang, H. S. (2010). <em>Error rates in measuring teacher and school performance based on student test score gains<\/em> (NCEE 2010-4004). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.<\/p>\n<p>Stevens, J., &amp; Zvoch, K. (2006). Issues in the implementation of longitudinal growth models for student achievement. In R. W. Lissitz (Ed.), <em>Longitudinal and value added models of student performance<\/em> (pp. 170-209). 
Maple Grove, MN: JAM Press.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Mythbusting Monday is a series of blog posts\u00a0where we invite\u00a0different faculty members from\u00a0the College of Education to discuss\u00a0education\u00a0myths and how or what we are doing to combat the myth, fix the myth, etc.\u00a0 We want our\u00a0faculty to have a voice&hellip; <\/p>\n","protected":false},"author":1546,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1,11974,273676,5007],"tags":[],"class_list":["post-500","post","type-post","status-publish","format-standard","hentry","category-uncategorized","category-news","category-mythbusting-monday","category-policy"],"_links":{"self":[{"href":"https:\/\/blogs.butler.edu\/coe\/wp-json\/wp\/v2\/posts\/500","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.butler.edu\/coe\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.butler.edu\/coe\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.butler.edu\/coe\/wp-json\/wp\/v2\/users\/1546"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.butler.edu\/coe\/wp-json\/wp\/v2\/comments?post=500"}],"version-history":[{"count":2,"href":"https:\/\/blogs.butler.edu\/coe\/wp-json\/wp\/v2\/posts\/500\/revisions"}],"predecessor-version":[{"id":502,"href":"https:\/\/blogs.butler.edu\/coe\/wp-json\/wp\/v2\/posts\/500\/revisions\/502"}],"wp:attachment":[{"href":"https:\/\/blogs.butler.edu\/coe\/wp-json\/wp\/v2\/media?parent=500"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.butler.edu\/coe\/wp-json\/wp\/v2\/categories?post=500"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.butler.edu\/coe\/wp-json\/wp\/v2\/tags?post=500"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}