Making the Most of Student Teaching: The Importance of Mentors and Scope for Change

Dan Goldhaber
American Institutes for Research/CALDER
University of Washington 

John Krieg
Western Washington University 

Natsumi Naito
University of Washington 

Roddy Theobald
American Institutes for Research/CALDER

CALDER Policy Brief No. 15-0519

Abstract

A growing literature documents the importance of student teaching placements for teacher development. Emerging evidence from this literature highlights the importance of the mentor teacher who supervises this placement. This brief provides an overview of this research, which suggests that teachers tend to be more effective when they student teach with a more effective mentor. We illustrate that there is ample scope for change in student teacher placements by using data from Washington State to demonstrate that, in each year, there are far more highly effective teachers who could serve as mentors than actually do serve. We also discuss the considerable challenges to improvement efforts related to the need for better coordination between teacher education programs, K–12 school systems, and states. If policymakers value teacher candidate development as much as inservice teacher development, we argue that they should be willing to pay about 15 times the average current compensation for mentor teachers in order to recruit highly effective teachers to host student teachers.

 

Introduction

A significant share of the overall investment in the development of public school teachers, almost $7 billion per year, is in their preparation before they become teachers. For the average teacher, this represents about a third of the total financial investment in their professional development over the course of their career (Goldhaber, Krieg, & Theobald, 2017). Until recently, most research on teacher development focused on interventions targeting inservice teachers. This is beginning to change. New data systems that connect the preservice experiences of teacher candidates with their inservice outcomes have enabled a rapid expansion of empirical evidence on whether and how preservice experiences predict teacher effectiveness and performance.

While it is still early days in terms of evidence on the value of specific preservice experiences,[1] existing research suggests that some experiences have real value in promoting the development of teacher candidates. Whom teacher candidates work with as their mentor or “cooperating” teacher (the teacher tasked with overseeing a teacher candidate’s internship/student teaching experience on the district side) appears to be particularly important (Goldhaber, Krieg, & Theobald, 2018; Ronfeldt, Brockman, & Campbell, 2018; Ronfeldt, Goldhaber, et al., 2018; Ronfeldt, Matsko, Greene Nolan, & Reininger, 2018). This finding is not terribly surprising because student teaching and the role of the mentor teacher have long been viewed by teacher organizations and qualitative researchers as foundational to the development of teacher candidates (AACTE, 2018; Anderson & Stillman, 2013; Clarke, Triggs, & Nielsen, 2014; Ganser, 2002; Graham, 2006; Hoffman et al., 2015; NCATE, 2010; Zeichner, 2009).

Yet the processes by which mentors are selected seem to be haphazard and are certainly not well understood (Borko & Mayfield, 1995; Clarke et al., 2014; Goldhaber, Grout, Harmon, & Theobald, 2018; NCTQ, 2016, 2017; St. John, Goldhaber, Krieg, & Theobald, 2018). In this policy brief, we first briefly review (Section 2) the last decade’s worth of quantitative evidence on the extent to which preservice experiences predict inservice teacher outcomes, zeroing in on several new quantitative studies that suggest the key role that mentors play. We then (Section 3) present new evidence on the extent to which it may be possible to improve the quality of the teachers who supervise student teaching. We also describe challenges to improved coordination between teacher education programs, K–12 school systems, and states. Finally (Section 4), we discuss implications for policy and practice.

Prior Literature: Preservice Experiences and Inservice Teacher Outcomes

A significant research base examines the test score outcomes of students who are assigned to teachers who enter the profession with different observable credentials.[2] The literature focusing on different routes into the teaching profession (e.g., Boyd, Grossman, Lankford, Loeb, & Wyckoff, 2009; Goldhaber & Brewer, 2000; Kane, Rockoff, & Staiger, 2008; Xu, Hannaway, & Taylor, 2011), including several randomized control trials (Chiang, Clark, & McConnell, 2017; Constantine et al., 2009; Glazerman, Mayer, & Decker, 2006), tends to find relatively little difference in student test results according to route into the profession. Related literature focuses on how well the licensure tests, used to determine whether individuals are eligible to participate in the teacher workforce, predict teacher effectiveness. This research (e.g., Clotfelter, Ladd, & Vigdor, 2007, 2010; Goldhaber, 2007) tends to find small but statistically significant positive relationships between licensure test performance and student achievement.

Studies across several states have explored the extent to which the credentialing teacher education program (TEP) explains the observed variation in teacher effectiveness (i.e., value added). While there are some differences across studies in findings and conclusions, most studies distinguish only a few programs from the average. For instance, von Hippel and Bellows (2018) reanalyzed data across six states and found that differences in teacher value added across TEPs are “negligible.”[3]

A few studies focus on the features of teacher education and their relationships to teacher effectiveness (value added or student achievement). In a seminal study, Boyd, Grossman, Lankford, Loeb, and Wyckoff (2009) linked comprehensive survey data on the preparation experiences of new teachers in New York City Public Schools to their early career value added to student achievement. The authors reported that many aspects of teacher preparation, including the amount of focus on practice, the alignment between the preservice curriculum and teachers’ current teaching placements, and whether the teacher was required to complete a student teaching placement, are all positively predictive of value added upon entering the workforce. Ronfeldt (2012, 2015) extended this work with a particular focus on candidates’ student teaching placements; he found that teachers who student taught in schools with less teacher turnover and more staff collaboration are more effective once they enter the workforce.

Newer evidence is based on data about all teacher candidates, not just those candidates who are later observed in the teacher workforce. For instance, Goldhaber, Krieg, et al. (2017) used data on all student teaching placements (i.e., not just information on teacher candidates who end up employed as teachers), which allowed them to consider candidates who do and do not enter the teaching workforce and thus account for bias associated with selection into the teaching workforce. They found that teachers tend to be more effective when they teach in a school with student demographics similar to those of their student teaching school. More recent work by Goldhaber, Krieg, and Theobald (2018) and Ronfeldt, Brockman, and Campbell (2018) focused specifically on the characteristics of mentor teachers, and both studies reported that teachers who are supervised by a more effective mentor teacher during student teaching tend to be more effective when they enter the workforce.

Figure 1: Estimated relationships between preservice education experience and value added in math

Note. SD = standard deviation, VA = value added.

While all the research mentioned previously is useful for improving the preservice experiences of teacher candidates, we focus on the value of mentor teachers because the estimated effects of working with a skilled mentor appear to be much larger than the estimated effects of changing other types of preservice experiences. This is illustrated in Figure 1, which compares estimated effect sizes on student achievement in math from the papers discussed previously.[4] As the figure illustrates, the predicted change in teacher value added associated with a one standard deviation increase in mentor teacher value added is about 0.04 standard deviations of student performance (Goldhaber, Krieg, & Theobald, 2018), which is considerably larger than the other effect sizes in this literature. But while upgrading the effectiveness of mentor teachers appears promising in theory, policymakers and practitioners might reasonably question whether there are enough skilled mentors and, relatedly, whether they can be induced to serve in a mentor teacher role.
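
To make the magnitude of this relationship concrete, the following is a minimal sketch (not the authors’ code) that scales the roughly 0.04 standard deviation estimate to hypothetical mentor assignments; the linear scaling is an illustrative assumption.

# Illustrative sketch (not the authors' code): scale the estimated relationship
# from Goldhaber, Krieg, & Theobald (2018) to hypothetical mentor assignments.

# Effect size reported in the brief: a 1 SD increase in mentor value added
# predicts roughly a 0.04 SD increase in student math performance for the mentee.
EFFECT_PER_MENTOR_SD = 0.04


def predicted_mentee_boost(mentor_va_sd: float) -> float:
    """Predicted change in a mentee's value added (in student-level SDs of math
    achievement) when the mentor is mentor_va_sd SDs above average, assuming
    the relationship scales linearly (an illustrative assumption)."""
    return EFFECT_PER_MENTOR_SD * mentor_va_sd


for mentor_sd in (0.0, 1.0, 2.0):
    print(f"Mentor at +{mentor_sd:.0f} SD -> predicted boost of "
          f"{predicted_mentee_boost(mentor_sd):.2f} student SDs in math")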

The Potential for Changes to the Match Between Mentors and Teacher Candidates

Only a few studies explore what predicts the likelihood that teacher candidates are matched to particular internship schools and mentor teachers. The limited quantitative evidence on this topic (Krieg, Theobald, & Goldhaber, 2016; Krieg, Goldhaber, & Theobald, 2018) suggests that geographic proximity to a TEP and homophily between the mentor’s TEP and the candidate’s TEP are the strongest predictors of where and with whom student teaching occurs. These quantitative findings are supported by qualitative evidence (Maier & Youngs, 2009; St. John et al., 2018) of the important role that social networks play in student teaching placements.

Krieg et al. (2018) found that only 3% to 4% of teachers serve as mentors in an average year, which closely mirrors back-of-the-envelope estimates of the national percentage.[5] Thus, at first glance, there appears to be significant scope for change in mentor assignments. Yet the picture is not entirely clear: we recognize that it is more logistically challenging for TEPs to oversee internships that are more geographically dispersed. We are therefore particularly interested in whether there are more effective teachers who might serve as mentors in the schools and districts that already tend to host student teachers.

To assess the potential to change who serves as a mentor teacher, we use data on student teaching placements from 15 TEPs in Washington State collected as part of the Teacher Education Learning Collaborative (TELC); this is the same data set used in Goldhaber, Krieg, and Theobald (2018) and Krieg et al. (2018), discussed previously. Graduates of these TEPs account for more than 81% of the new teachers prepared in Washington State between 2010 and 2015, and 92% of the new teachers in the western half of the state. The TELC data therefore likely represent nearly a census of student teaching placements in the western half of the state during these years, so we focus this analysis on school districts in this region.

Figure 2: Distribution of math value added for teachers within 50 miles of a TEP who do and do not serve as mentor teachers

Figure 2 compares the value added of math teachers in Grades 4–8 within 50 miles of a TEP who do and do not host a student teacher between 2010 and 2015.[6] We focus on this group of teachers because more than 99% of all student teacher placements in the TELC data are within 50 miles of a participating TEP, and value added can be calculated for nearly all math teachers in these grades. Figure 2 shows that, consistent with Krieg et al. (2018), mentor teachers in this sample are somewhat more effective than teachers in the sample who do not serve as mentors in the same year. More striking, however, is the fact that more than 40% of math teachers within 50 miles of a TEP who do not host a student teacher are more effective than the average math teacher who does serve as a mentor teacher.

Table 1: Number of Grade 4–8 math teachers who do and do not serve as a mentor teacher

Number of math teachers in Grades 4–8 who:

                              Serve as     Do not serve     Do not serve as mentor,    Do not serve as mentor,
                              mentor       as mentor        >1 SD above mean VA        >2 SD above mean VA
Same district as TEP             167          3,088               474                        70
Within 25 miles of TEP           295          7,826             1,177                       214
Within 50 miles of TEP           301          8,518             1,283                       234

Note. SD = standard deviation, TEP = teacher education program, VA = math value added.

To make these numbers more concrete, Table 1 reports the number of mentor and potential mentor teachers within the same district as a TEP, within 25 miles of a TEP, and within 50 miles of a TEP. We focus on the group of teachers who do not serve as a mentor teacher and calculate how many of them are at least 1 or 2 standard deviations more effective than the average teacher in the state. We find that there are two to four times as many effective teachers (at least 1 standard deviation more effective than the average mentor teacher) within any of these geographic areas as are currently being used as mentors, and nearly as many highly effective teachers (at least 2 standard deviations more effective than the average mentor teacher) as are currently hosting student teachers at all. These findings suggest that, contrary to some anecdotal evidence (e.g., St. John et al., 2018), many effective teachers are not currently serving in the mentor teacher role in Washington.
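
As a concrete illustration of the kind of tabulation behind Table 1, the sketch below uses a small hypothetical data set (the column names and values are ours, not the TELC data or the authors’ code) to count non-mentor teachers within a given distance of a TEP whose value added exceeds a chosen threshold.

import pandas as pd

# Hypothetical teacher-level records; columns and values are illustrative only.
# va_sd: teacher's math value added, in standard deviations from the mean.
# dist_to_tep_miles: distance from the teacher's school to the nearest TEP.
# is_mentor: whether the teacher hosted a student teacher during the period.
teachers = pd.DataFrame({
    "va_sd": [2.3, 0.1, -0.5, 1.4, 2.1, 0.7],
    "dist_to_tep_miles": [12, 48, 30, 5, 60, 22],
    "is_mentor": [False, True, False, False, False, True],
})


def count_potential_mentors(df, max_miles, va_threshold):
    """Count non-mentor teachers within max_miles of a TEP whose value added
    is at least va_threshold standard deviations above the mean."""
    mask = (
        ~df["is_mentor"]
        & (df["dist_to_tep_miles"] <= max_miles)
        & (df["va_sd"] >= va_threshold)
    )
    return int(mask.sum())


for miles in (25, 50):
    for threshold in (1.0, 2.0):
        n = count_potential_mentors(teachers, miles, threshold)
        print(f"Within {miles} miles, non-mentors at or above {threshold:.0f} SD: {n}")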

Challenges and Implications for Policy and Practice

While the evidence presented in the previous section paints a rosy picture of the potential scope for change in student teacher placements and mentor teacher assignments, recent qualitative evidence from Washington (Goldhaber, Grout, et al., 2018; St. John et al., 2018) suggests that there are also considerable challenges to changing the status quo for placement processes. Specifically, St. John et al. (2018) analyzed interviews with the individuals responsible for student teacher placements in TEPs and school districts in Washington, and Goldhaber, Grout, et al. (2018) described potential reasons for the lack of take-up of an intervention in Spokane Public Schools in Washington in which effective teachers (according to district performance evaluations) were encouraged to host a student teacher. Both studies highlight skepticism within TEPs and districts about whether teachers who are effective according to observable measures (e.g., performance evaluations or value added) are also effective mentors for student teachers. St. John et al. (2018) further documented the considerable barriers to effective communication between TEPs and districts about student teaching placements, while Goldhaber, Grout, et al. (2018) noted some teachers’ discomfort at being differentiated from their peers (even in a positive way) when recruited to serve as a mentor teacher.

Perhaps most importantly, while mentor teachers may wish to give back to the profession by contributing to the development of teacher candidates, there is little financial incentive for teachers to serve as a mentor teacher. For instance, Fives, Mills, and Dacey (2016) reported that the average mentor teacher receives just over $200 in compensation. This value is a far cry from our back-of-the-envelope calculation of what effective mentor teachers are “worth” to schools and districts. Specifically, to calculate the value of effective mentors, we return to the result in Goldhaber, Krieg, and Theobald (2018) that the average teacher who is mentored by a highly effective teacher (2 standard deviations above average value added) begins their career with the same effectiveness as the average third-year teacher in the state. In Washington State, the average third-year teacher is paid $3,500 more than the average first-year teacher.[7] In other words, to the degree that the financial reward for teaching experience reflects the value that policymakers place on the increased value-added effectiveness of third-year teachers over novice teachers, they should be willing to invest substantially more (roughly 15 times more) to encourage effective teachers to become mentors. While this may seem like a substantial investment, estimates from Chetty, Friedman, and Rockoff (2014) suggest that the present value to students of having a first-year teacher who student taught with a highly effective mentor teacher, relative to an average mentor teacher, is roughly $70,000 in lifetime earnings across an average classroom. This estimate may actually considerably understate the overall value of effective mentor teachers.[8]
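
The back-of-the-envelope figures in this paragraph and in notes 7 and 8 follow from simple arithmetic; the sketch below restates them using only the inputs reported in the brief ($200, $3,500, $7,000, a class size of 26.7, and a 0.38 standard deviation gain in mentee value added).

# Back-of-the-envelope arithmetic restating figures from the text and notes 7 and 8.
# All inputs are taken directly from the brief.

avg_mentor_compensation = 200        # Fives, Mills, and Dacey (2016): just over $200
salary_step_year1_to_year3 = 3_500   # WA salary gap, 3rd-year vs. 1st-year teacher (note 7)

# Ratio of the experience premium to current mentor compensation: about 17.5,
# consistent with the brief's statement of "roughly 15 times more."
print(salary_step_year1_to_year3 / avg_mentor_compensation)

# Present value to students (note 8): roughly $7,000 in lifetime earnings per
# student per 1 SD of teacher value added (Chetty et al., 2014), times the
# average class size of 26.7, times the 0.38 SD gain in mentee value added
# associated with a +2 SD mentor.
per_student_per_sd = 7_000
avg_class_size = 26.7
mentee_va_gain_sd = 0.38
print(per_student_per_sd * avg_class_size * mentee_va_gain_sd)  # about 71,000, i.e., roughly $70,000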

Teacher value-added effectiveness is only one dimension of teacher quality, and as we noted, being an effective teacher does not necessarily equate to being an effective mentor. The evidence about the importance of mentor teachers for teacher candidate development is compelling, and the best evidence we can generate suggests substantial underinvestment in this crucial role. Consequently, we conclude this brief by advocating for more focus on research into what constitutes effective teacher candidate mentorship and more policy attention (and potentially funding for teacher mentors) on how to ensure that teacher candidates receive high-quality mentorship during their student teaching.

 

References

American Association of Colleges of Teacher Education (AACTE). (2018). A pivot towards clinical practice, its lexicon, and the renewal of educator preparation: A report of the AACTE Clinical Practice Commission. Washington, DC: Author.

Anderson, L. M., & Stillman, J. A. (2013). Student teaching’s contribution to preservice teacher development: A review of research focused on the preparation of teachers for urban and high-needs contexts. Review of Educational Research, 83(1), 3–69.

Borko, H., & Mayfield, V. (1995). The roles of the cooperating teacher and university supervisor in learning to teach. Teaching and Teacher Education, 11(5), 501–518.

Boyd, D. J., Grossman, P. L., Lankford, H., Loeb, S., & Wyckoff, J. (2009). Teacher preparation and student achievement. Educational Evaluation and Policy Analysis, 31(4), 416–440.

Chetty, R., Friedman, J. N., & Rockoff, J. E. (2014). Measuring the impacts of teachers II: Teacher value-added and student outcomes in adulthood. American Economic Review, 104(9), 2633–2679.

Chiang, H. S., Clark, M. A., & McConnell, S. (2017). Supplying disadvantaged schools with effective teachers: Experimental evidence on secondary math teachers from Teach For America. Journal of Policy Analysis and Management, 36(1), 97–125.

Clarke, A., Triggs, V., & Nielsen, W. (2014). Cooperating teacher participation in teacher education: A review of the literature. Review of Educational Research, 84(2), 163–202.

Clotfelter, C. T., Ladd, H., & Vigdor, J. (2007). Teacher credentials and student achievement: Longitudinal analysis with student fixed effects. Economics of Education Review, 26(6), 673–682.

Clotfelter, C. T., Ladd, H. F., & Vigdor, J. L. (2010). Teacher credentials and student achievement in high school: A cross-subject analysis with student fixed effects. Journal of Human Resources, 45(3), 655–681.

Constantine, J., Player, D., Silva, T., Hallgren, K., Grider, M., & Deke, J. (2009). An evaluation of teachers trained through different routes to certification (No. NCES 2009-4043). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.

Fives, H., Mills, T. M., & Dacey, C. M. (2016). Cooperating teacher compensation and benefits: Comparing 1957–1958 and 2012–2013. Journal of Teacher Education, 67(2), 105–119.

Ganser, T. (2002, December). How teachers compare the roles of cooperating teacher and mentor. The Educational Forum, 66(4), 380–385.

Glazerman, S., Mayer, D., & Decker, P. (2006). Alternative routes to teaching: The impacts of Teach for America on student achievement and other outcomes. Journal of Policy Analysis and Management, 25(1), 75–96.

Goldhaber, D. (2007). Everyone’s doing it, but what does teacher testing tell us about teacher effectiveness? Journal of Human Resources, 42(4), 765–794.

Goldhaber, D. (2019). Evidence-based teacher preparation: Policy context and what we know. Journal of Teacher Education, 70(2), 90–101.

Goldhaber, D. D., & Brewer, D. J. (2000). Does teacher certification matter? High school teacher certification status and student achievement. Educational Evaluation and Policy Analysis, 22(2), 129–145.

Goldhaber, D., Cowan, J., & Theobald, R. (2017). Evaluating prospective teachers: Testing the predictive validity of the edTPA. Journal of Teacher Education, 68(4), 377–393.

Goldhaber, D., Gratz, T., & Theobald, R. (2017). What’s in a teacher test? Assessing the relationship between teacher licensure test scores and student secondary STEM achievement and course taking. Economics of Education Review, 61, 112–129.

Goldhaber, D., Grout, C., Harmon, K., & Theobald, R. (2018). A practical guide to challenges and opportunities in student teaching: A school district’s perspective. CALDER Working Paper No. 205-1018-1. Washington, DC: American Institutes for Research.

Goldhaber, D., Krieg, J. M., & Theobald, R. (2017). Does the match matter? Exploring whether student teaching experiences affect teacher effectiveness. American Educational Research Journal, 54(2), 325–359.

Goldhaber, D., Krieg, J., & Theobald, R. (2018). Effective like me? Does having a more productive mentor improve the productivity of mentees? CALDER Working Paper No. 208-1118-1. Washington, DC: American Institutes for Research.

Graham, B. (2006). Conditions for successful field experiences: Perceptions of cooperating teachers. Teaching and Teacher Education, 22(8), 1118–1129.

Hoffman, J. V., Wetzel, M. M., Maloch, B., Greeter, E., Taylor, L., DeJulio, S., & Vlach, S. K. (2015). What can we learn from studying the coaching interactions between cooperating teachers and preservice teachers? A literature review. Teaching and Teacher Education, 52, 99–112.

Kane, T. J., Rockoff, J. E., & Staiger, D. O. (2008). What does certification tell us about teacher effectiveness? Evidence from New York City. Economics of Education Review, 27(6), 615–631.

Krieg, J. M., Goldhaber, D., & Theobald, R. (2018). Teacher candidate apprenticeships: Assessing the who and where of student teaching. CALDER Working Paper No. 206-1118-1. Washington, DC: American Institutes for Research.

Krieg, J. M., Theobald, R., & Goldhaber, D. (2016). A foot in the door: Exploring the role of student teaching assignments in teachers’ initial job placements. Educational Evaluation and Policy Analysis, 38(2), 364–388.

Maier, A., & Youngs, P. (2009). Teacher preparation programs and teacher labor markets: How social capital may help explain teachers’ career choices. Journal of Teacher Education, 60(4), 393–407.

National Center for Education Statistics (NCES). (2018). Fast facts: Back to school statistics. Retrieved from https://nces.ed.gov/fastfacts/display.asp?id=372

National Council for Accreditation of Teacher Education (NCATE). (2010). Transforming teacher education through clinical practice: A national strategy to prepare effective teachers (Report of the Blue Ribbon Panel on clinical preparation and partnerships for improved student learning). Washington, DC: Author.

National Council on Teacher Quality (NCTQ). (2016). A closer look at student teaching: Undergraduate elementary programs. Washington, DC: Author.

National Council on Teacher Quality (NCTQ). (2017). A closer look at student teaching: Undergraduate secondary programs. Washington, DC: Author.

Ronfeldt, M. (2012). Where should student teachers learn to teach? Effects of field placement school characteristics on teacher retention and effectiveness. Educational Evaluation and Policy Analysis, 34(1), 3–26.

Ronfeldt, M. (2015). Field placement schools and instructional effectiveness. Journal of Teacher Education, 66(4), 304–320.

Ronfeldt, M., Brockman, S., & Campbell, S. (2018). Does cooperating teachers’ instructional effectiveness improve preservice teachers’ future performance? Educational Researcher, 47(7).

Ronfeldt, M., Goldhaber, D., Cowan, J., Bardelli, E., Johnson, J., & Tien, C. D. (2018). Identifying promising clinical placements using administrative data: Preliminary results from ISTI Placement Initiative Pilot. CALDER Working Paper No. 189. Washington, DC: American Institutes for Research.

Ronfeldt, M., Matsko, K. K., Greene Nolan, H., & Reininger, M. (2018). Who knows if our teachers are prepared? Three different perspectives on graduates’ instructional readiness and the features of preservice preparation that predict them (CEPA Working Paper No. 18-01). Retrieved from https://cepa.stanford.edu/wp18-01

St. John, E., Goldhaber, D., Krieg, J., & Theobald, R. (2018). How the match gets made: Exploring student teacher placements across teacher education programs, districts, and schools. CALDER Working Paper 111018. Washington, DC: American Institutes for Research.

Title II. (2017). Completers, by state, by program level. Retrieved from https://title2.ed.gov/Public/DataTools/Tables.aspx

U.S. Department of Education. (2019). Title II tips for reporting: Frequently asked questions. Retrieved from https://title2.ed.gov/Public/TA/FAQ.pdf

von Hippel, P. T., & Bellows, L. (2018). How much does teacher quality vary across teacher preparation programs? Reanalyses from six states. Economics of Education Review, 64, 298–312.

Xu, Z., Hannaway, J., & Taylor, C. (2011). Making a difference? The effects of Teach For America in high school. Journal of Policy Analysis and Management, 30(3), 447–469.

Zeichner, K. M. (2009). Teacher education and the struggle for social justice. New York, NY: Routledge.

 

Notes

[1] A Google Scholar search on “teacher professional development,” a popular inservice intervention, reveals more than 100,000 papers on the topic.

[2] Prior work from Washington found that scores on the basic skills tests required for teacher education program (TEP) entry (Goldhaber, Gratz, et al., 2017) and the edTPA portfolio-based assessment (Goldhaber, Cowan, et al., 2017) are positively predictive of future teaching effectiveness.

[3] For more information on this literature, see Goldhaber (2019).

[4] These are all traditional estimates of effect sizes (estimates of the effect on math value added of a 1 SD change in an input) with the exception of the effect from Goldhaber, Krieg, et al. (2017). This estimate of the “optimal match” is based on the authors’ calculation assuming that student teaching occurs in schools that are estimated to be optimal for value added according to Figure 5 in Goldhaber, Krieg, et al. (2017).

[5] There are about 3.2 million public school teachers in the United States (NCES, 2018) and approximately 130,000 graduates of traditional (college- and university-based) TEPs, which require student teaching, in the most recent year of national data reporting (Title II, 2017). These data suggest that the percentage of teachers nationally who host a student teacher in a given year is about 4%.

[6] For details on the data used to estimate teacher value added and model specification, see Krieg et al. (2018).

[7] This is calculated from the average salaries of full-time classroom teachers with zero or two years of teaching experience in the S-275, a personnel data set of all public employees in Washington.

[8] This calculation takes the estimate from Chetty et al. (2014) of the impact of a one standard deviation increase in teacher effectiveness on the present value gain in lifetime income for the average age 12 student, $7,000; multiplies by the average class size in the Washington data, 26.7 students; and then multiplies by 0.38 (the expected increase in standard deviations of teacher value added associated with a two standard deviation increase in mentor value added).