Carnegie Panel on Assessing Teaching to Improve Learning: Value-Added Methods and Applications
The Carnegie Foundation has brought together a distinguished group of researchers to translate cutting-edge research on value-added into usable information for the design and administration of teacher evaluation systems. Daniel Goldhaber, Doug Harris, Susanna Loeb, Daniel McCaffrey, and Steve Raudenbush have been selected to join the Carnegie Panel. The Foundation has purposely chosen for this panel individuals who have expertise in statistics and economics, who are impartial about particular value-added modeling strategies, and whose previous research, taken together, represents a range of views. These technical experts have engaged with a panel of K-12 leaders to ensure that their work is relevant and accessible. The Carnegie Panel has co-developed a set of entries that explore issues critical to the use of value-added in teacher evaluation systems.
Technical Panel
Dan Goldhaber is the Director of the Center for Education Data & Research and a Professor in Interdisciplinary Arts and Sciences at the University of Washington Bothell. He is also the co-editor of Education Finance and Policy and a member of the Washington State Advisory Committee to the U.S. Commission on Civil Rights. Dan previously served as an elected member of the Alexandria City School Board from 1997 to 2002 and as an Associate Editor of the Economics of Education Review. Dan’s work focuses on issues of educational productivity and reform at the K-12 level; the broad array of human capital policies that influence the composition, distribution, and quality of teachers in the workforce; and connections between students’ K-12 experiences and postsecondary outcomes. Topics of published work in this area include studies of the stability of value-added measures of teachers, the effects of teacher qualifications and quality on student achievement, and the impact of teacher pay structure and licensure on the teacher labor market. Previous work has covered topics such as the relative efficiency of public and private schools and the effects of accountability systems and market competition on K-12 schooling. Dan’s research has been regularly published in leading peer-reviewed economics and education journals such as the American Economic Review, Review of Economics and Statistics, Journal of Human Resources, Journal of Policy Analysis and Management, Journal of Urban Economics, Economics of Education Review, Education Finance and Policy, Industrial and Labor Relations Review, and Educational Evaluation and Policy Analysis. The findings from these articles have been covered in more widely accessible media outlets such as National Public Radio, the New York Times, the Washington Post, USA Today, and Education Week. Dr. Goldhaber holds degrees from the University of Vermont (B.A., Economics) and Cornell University (M.S. and Ph.D., Labor Economics).
KNOWLEDGE BRIEFS
- What Do We Know About the Tradeoffs Associated with Teacher Misclassification in High Stakes Personnel Decisions?
- Do Different Value-Added Models Tell Us the Same Things?
- What Do Value-Added Measures of Teacher Preparation Programs Tell Us?
For more from this author, see:
- Goldhaber, Dan, Joe Walch, and Brian Gabele. Center for Education Data & Research, “Does the Model Matter? Exploring the Relationship Between Different Achievement-based Teacher Assessments.” Last modified 2012.
- Goldhaber, Dan, and Michael Hansen. “Is It Just a Bad Class? Assessing the Long-term Stability of Estimated Teacher Performance.” Economica (forthcoming).
- Goldhaber, Dan, and Michael Hansen. “Using Performance on the Job to Inform Teacher Tenure Decisions.” American Economic Review 100, no. 2 (2010): 250-255.
Douglas N. Harris is Associate Professor of Economics and University Endowed Chair in Public Education at Tulane University. His research explores how students’ educational outcomes are influenced by school choice, standards, teacher evaluation, test-based accountability, college financial aid, and college access programs. A former school board member, he marries theory and rigorous research with the practical realities of schooling, with publications ranging from the general-interest journal Science to the Journal of Public Economics, the Journal of Policy Analysis and Management, and others. His recent book, Value-Added Measures in Education (Harvard Education Press, 2011), was nominated for the national Grawemeyer Award. Washington Monthly magazine has used his research on college performance measures in its college ratings, and David Brooks has cited related work in his New York Times column. He has advised eight state departments of education, elected officials at all levels of government, and groups such as the National Academy of Sciences, the National Conference of State Legislatures, the National Governors Association, and the National School Boards Association. His work is frequently cited in the national media, including CNN, Education Week, The New York Times, and The Washington Post.
KNOWLEDGE BRIEFS
- How Might We Use Multiple Measures for Teacher Accountability?
- Does Value-Added Work Better in Elementary Than in Secondary Grades?
- How Do Value-Added Indicators Compare to Other Measures of Teacher Effectiveness?
For more from this author, see:
- Harris, Douglas N. Value-Added Measures in Education: What Every Educator Needs to Know. Boston: Harvard Education Press, 2011.
- Harris, Douglas N. “Value-Added Measures: A New Approach.” The Huffington Post, sec. Education, September 18, 2012.
- Harris, Douglas, Tim Sass, and Anastasia Semykina. The Urban Institute, “Value-Added Models and the Measurement of Teacher Productivity.” Last modified 2011.
Susanna Loeb is the Barnett Family Professor of Education at Stanford University, Faculty Director of the Center for Education Policy Analysis, and a Co-Director of Policy Analysis for California Education (PACE). She specializes in the economics of education and the relationship between schools and federal, state, and local policies. Her research addresses teacher policy, looking specifically at how teachers’ preferences affect the distribution of teaching quality across schools, how pre-service coursework requirements affect the quality of teacher candidates, and how reforms affect teachers’ career decisions. She also studies school leadership and school finance, for example looking at how the structure of state finance systems affects the level and distribution of resources across schools. Susanna is a senior fellow at the Stanford Institute for Economic Policy Research, a faculty research fellow at the National Bureau of Economic Research, a member of the Policy Council of the Association for Public Policy Analysis and Management, and Co-Editor of Educational Evaluation and Policy Analysis.
KNOWLEDGE BRIEFS
- How Can Value-Added Measures Be Used For Teacher Improvement?
- What Do We Know About the Use of Value-Added Measures for Principal Evaluation?
- What Do We Know About the Tradeoffs Associated with Teacher Misclassification in High Stakes Personnel Decisions?
- How Stable are Value-Added Estimates across Years, Subjects, and Student Groups?
For more from this author, see:
- Loeb, Susanna, Tara Beteille, and Demetra Kalogrides. “Effective Schools: Teacher Hiring, Assignment, Development, and Retention.” Education Finance and Policy 7, no. 3 (2012): 269-304.
- Lankford, Hamilton, Susanna Loeb, Donald Boyd, and James Wyckoff. “Teacher Layoffs: An Empirical Illustration of Seniority v. Measures of Effectiveness.” Education Finance and Policy 6, no. 3 (2011): 439-454.
- Boyd, Donald, Hamilton Lankford, Susanna Loeb, Matthew Ronfeldt, and James Wyckoff. “The Role of Teacher Quality in Retention and Hiring: Using Applications-to-Transfer to Uncover Preferences of Teachers and Schools.” Journal of Policy Analysis and Management 30, no. 1 (2011): 88-110.
- Boyd, Donald, Pamela Grossman, Hamilton Lankford, Susanna Loeb, and James Wyckoff. “Teacher Preparation and Student Achievement.” Educational Evaluation and Policy Analysis 31, no. 4 (2009): 416-440.
Daniel F. McCaffrey is a Principal Research Scientist at Educational Testing Service. Previously, he was a Senior Statistician and PNC Chair in Policy Analysis at the RAND Corporation. He is a fellow of the American Statistical Association and is nationally recognized for his work on value-added modeling for estimating teacher performance. McCaffrey oversaw RAND’s efforts as part of the Gates Foundation’s Measures of Effective Teaching study to develop and validate sophisticated metrics to assess and improve teacher performance. He is currently working on additional studies comparing value-added measures to other measures of teaching, including classroom observations. He recently completed work on a four-year project funded by the Institute of Education Sciences (IES) that developed alternative value-added models of teachers’ effectiveness. McCaffrey is also the principal investigator of a National Institute on Drug Abuse–funded study, and he recently led RAND’s efforts as a major partner in the National Center on Performance Incentives, which conducted randomized controlled experiments to test the effects of using value-added to reward teachers with bonuses. He led an evaluation of the Pennsylvania Value-Added Assessment System (PVAAS) pilot program and was the lead statistician on other randomized field trials of school-based interventions, including evaluations of the Cognitive Tutor geometry curriculum, the Project ALERT Plus middle and high school drug prevention program, and the teen dating violence prevention curriculum Break the Cycle. McCaffrey received his Ph.D. in statistics from North Carolina State University.
KNOWLEDGE BRIEFS
- Do Value-Added Methods Level the Playing Field for Teachers?
- Will Teacher Value-Added Scores Change when Accountability Tests Change?
- Is Value-Added Accurate for Teachers of Students with Disabilities?
For more from this author, see:
- McCaffrey, Daniel F., Daniel Koretz, J. R. Lockwood, and Laura S. Hamilton. Evaluating Value-Added Models for Teacher Accountability. Santa Monica, CA: RAND Corporation, 2004.
- McCaffrey, Daniel F., and Laura S. Hamilton. Value-Added Assessment in Practice: Lessons from the Pennsylvania Value-Added Assessment System Pilot Project. Santa Monica, CA: RAND Corporation, 2007.
- McCaffrey, Daniel F., Tim R. Sass, J. R. Lockwood, and Kata Mihaly. “The Intertemporal Variability of Teacher Effect Estimates.” Education Finance and Policy 4 (2009): 572-606.
- McCaffrey, Daniel F., J. R. Lockwood, and Bing Han. “Turning Student Test Scores into Teacher Compensation Systems.” In Performance Incentives: Their Growing Impact on American K-12 Education, 113-147. 2009.
- McCaffrey, Daniel F., J. R. Lockwood, Daniel M. Koretz, Laura S. Hamilton, and T. A. Louis. “Models for Value-Added Modeling of Teacher Effects.” Journal of Educational and Behavioral Statistics 29, no. 1 (2004): 67-101.
Stephen Raudenbush, Ed.D., is the Lewis-Sebring Distinguished Service Professor in the Department of Sociology, Professor at the Harris School of Public Policy Studies, and Chairman of the Committee on Education at the University of Chicago. He received an Ed.D. in Policy Analysis and Evaluation Research from Harvard University. He is a leading scholar on quantitative methods for studying child and youth development within social settings such as classrooms, schools, and neighborhoods. He is best known for his work on developing hierarchical linear models, with broad applications in the design and analysis of longitudinal and multilevel research. He is currently studying the development of literacy and math skills in early childhood with implications for instruction, and methods for assessing school and classroom quality. He is a member of the American Academy of Arts and Sciences and the recipient of the American Educational Research Association Award for distinguished contributions to educational research.
KNOWLEDGE BRIEFS
- What Do We Know About the Long-term Impacts of Teacher Value-Added?
- What Do We Know About Using Value-Added To Compare Teachers Who Work in Different Schools?
- How Should Educators Interpret Value-Added Scores?
For more from this author, see:
- Raudenbush, Stephen. “What Are Value-Added Models Estimating and What Does This Imply for Statistical Practice?” Journal of Educational and Behavioral Statistics 29, no. 1 (2004): 121-130.
- Reardon, Sean F., and Stephen Raudenbush. “Assumptions of Value-Added Models for Estimating School Effects.” Education Finance and Policy 4, no. 4 (2009): 492-519.
- Raudenbush, Stephen. Educational Testing Service, “Schooling, Statistics, and Poverty: Can We Measure School Improvement?” 2004.