What Do We Know About the Effects of Clinical Practice Experiences and Teacher Performance?

Dan Goldhaber
CALDER

Venessa Keesler
Michigan Department of Education

CALDER Policy Brief No. 19-1119

Highlights

  • The last few years have witnessed the emergence of a significant new body of research connecting features of clinical practice to both value-added measures of teacher effectiveness and broader measures of teacher job performance.
  • Teachers appear to benefit from completing student teaching in schools that are more collegial and serve students who are demographically similar to those they will serve in their first jobs.
  • Evidence suggests that the most promising aspect of clinical practice for improving the job performance of new teachers is supervision by mentor teachers who have better job performance ratings and/or who are more effective based on value-added measures.
  • Although we have recently learned more about the connections between clinical experiences and job performance, the literature in this area is relatively sparse, greatly hindering the ability to improve teacher preparation. In particular, we know very little about the value of multiple types of field experiences, the role of field instructors, and the nature of feedback that student teachers receive about their skills or skill development while completing their clinical work.

Executive Summary

Theory suggests that clinical experiences—sometimes referred to as pre-service teaching, internships, or student teaching—affect teacher effectiveness by connecting teacher preparation coursework to PK–12 students and schools. Until recently, we have had little quantitative evidence indicating that these clinical experiences matter. This brief provides an overview of some of the research documenting the connections between different components of clinical experience—such as school culture, performance assessments, mentor teacher performance, and congruence between student teaching and in-service environments—and the in-service outcomes of teachers. Recent evidence shows that the environment in the schools in which clinical practice occurs and the alignment between the student demographics of internship schools and early-career schools are associated with the later effectiveness of those teacher candidates who go on to become teachers. There is also increasing evidence pointing to the value of working with an effective and/or high-performing mentor (also known as “cooperating”) teacher. This brief also highlights areas where less is definitively known (or no quantitative evidence exists), such as whether the introduction of assessments (like edTPA) improves teacher effectiveness.

What Is the Issue?

A critical element of formalized teacher preparation involves clinical practice experiences—sometimes referred to as pre-service teaching, internships, or student teaching. Most teachers agree that quality teacher preparation must involve a clinically rich program of study (Dennis et al., 2017) that cohesively connects teacher preparation coursework to PK–12 students and schools. This connection is intended to provide teacher candidates with a deliberate series of mediated, structured experiences (Darling-Hammond, 2018; Grossman, 2010; Zeichner, 2010) that offer opportunities for teacher candidates to engage PK–12 students with a commitment to their learning under the supervision of an experienced mentor (Grossman, 2010). Through these experiences, teacher candidates also connect theory to practice through an immersion into the materials of teaching, which can include authentic student work samples, assessment results, or data sets (Grossman, 2010; Darling-Hammond, 2018).[1] Clearly, getting clinical experiences right is an important task that states and teacher preparation programs (TPPs) face. This entails not only understanding what works in developing teacher candidates’ skills, but also striking a balance with time and costs to programs, school districts, and, potentially, individual teacher candidates. For example, although more clinical practice (e.g., through teacher residency models) might be beneficial, longer student teaching internships can impose costs on teacher candidates in the form of delayed workforce entry.

Unfortunately, for stakeholders seeking to improve clinical experiences, the empirical evidence is limited. One reason for this is that most states have limited ability to connect critical features of clinical experiences to in-service outcomes of teachers and their students—for example, how much do the kinds of classrooms in which clinical experiences occur, or the amount of time spent with mentors on various tasks, affect student and teacher outcomes (Goldhaber, 2019)? This makes it challenging to use evidence from in-service outcomes to inform pre-service practices that should, ideally, help in the development of teacher candidates.

Another challenge is more in the statistical weeds. It is empirically difficult to separate selection effects (i.e., who ends up in particular programs or internships) from human capital training effects (i.e., the development of teacher candidate capacities that are directly related to the clinical experiences they have).[2] Consequently, conclusive evidence about what works in terms of clinical experiences is relatively sparse (and none of it is definitively causal).
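
In stylized terms (the notation here is ours, purely for illustration, and is not drawn from any particular study cited in this brief), the estimated association between a clinical experience and later effectiveness bundles the two components together:

\[
\hat{\delta}_{\text{observed}} = \delta_{\text{training}} + \delta_{\text{selection}},
\]

where \(\delta_{\text{training}}\) is the human capital effect of the experience itself and \(\delta_{\text{selection}}\) reflects which candidates sort into that experience. Observational studies can estimate \(\hat{\delta}_{\text{observed}}\), but without random assignment they cannot cleanly apportion it between the two right-hand terms.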

Much of what shapes clinical experiences is determined on the ground by the relationship between TPPs and the local schools in which internships take place; more formally, clinical experiences are governed by agreements between TPPs and local school systems. Thus, it is not surprising that many aspects of clinical practice vary across TPPs within states (St. John et al., 2018). But states also play a higher-level policy role by accrediting teacher education programs and providing minimum requirements for various aspects of what clinical experiences must entail. Some states, such as Louisiana (discussed below), take a more active role in shaping clinical practice.

As we describe below, although we have recently learned a good deal about the importance of particular aspects of a teacher candidate’s clinical experience, much is left to learn about student teaching that could inform policy and practice.

What Is Known?

As noted above, clinical experiences are widely viewed as “a key component—even ‘the most important’ component of—pre-service teacher preparation” (Anderson & Stillman, 2013, p. 3), but until recently, little quantitative evidence has supported this position.[3] However, a number of new, large-scale quantitative studies focus on the connections between different aspects of clinical preparation and both teacher effectiveness and recorded job performance of those student teachers who later become teachers of record.[4] Most of these studies focus on one or more of five different areas of clinical practice: (1) measures of the supervision of student teaching experiences; (2) measures of the schools in which clinical experiences occur; (3) artifacts/performance assessments completed during student teaching; (4) measures of the congruence between student teaching and in-service teaching jobs; and (5) characteristics and measures of the job performance and effectiveness of the mentor teacher (also known as cooperating teacher).[5]

This new line of quantitative work was kicked off about a decade ago in research by Donald Boyd and colleagues (Boyd et al., 2009), who collected information about teacher preparation program features and surveyed first-year teachers about their teacher education programs, including aspects of clinical experiences. Simply having a student teaching placement does not appear predictive of later teacher effectiveness, but Boyd et al. found that first-year teachers are more effective (in value-added terms) when TPPs exercise greater oversight over clinical experiences. The magnitude of this effect is a bit tricky to quantify because oversight is measured by a composite score; however, the impacts on student test scores were similar in magnitude to the difference between being assigned a teacher with a year or two of experience and being assigned a novice teacher.[6] That said, this relationship does not appear for second-year teachers, which may indicate that teachers develop much of their skill during their first year of teaching.

Ronfeldt (2012, 2015) and Goldhaber et al. (2018) examined measures of the culture of collaboration in the schools where student teaching occurs and found that teacher candidates who have clinical experiences in schools with lower relative attrition (a measure of the culture in the school) turn out to be more effective once they have classroom responsibilities of their own.[7] Ronfeldt (2015) also offers evidence that the school-level value added where clinical practice occurs is related to the later effectiveness of teachers.

A number of states require that student teachers complete performance assessments; indeed, passing the edTPA has been rapidly adopted by states and TPPs as a requirement for receiving a teaching credential (Hutt et al., 2018). These “authentic” assessments are connected to the skills that teacher candidates demonstrate while completing clinical practice (e.g., lesson plans, student work samples). Research on the edTPA (Bastian et al., 2018; Goldhaber et al., 2017) found positive, but not consistently statistically significant, relationships between edTPA performance and teacher effectiveness; this somewhat ambiguous result may indicate either that the edTPA does not fully capture these skills or that the skills it measures are not strongly related to student outcomes.[8] Recent research on a similar pre-service performance assessment developed in Massachusetts, the Candidate Assessment of Performance (or CAP), found stronger relationships between CAP performance and ratings on the state’s in-service evaluation system (Chen, Cowan, Goldhaber, & Theobald, 2019).[9]

Not surprisingly, the few studies that focus on the congruence between student teaching and in-service teaching jobs found evidence supporting the benefits of greater congruence between the two. The aforementioned Boyd et al. (2009) study, for instance, found significant positive effects on student math (but not ELA) achievement when surveyed teachers reported congruence between their experiences while student teaching (e.g., grade level, types of supervision and feedback) and their experiences in schools as teachers of record. Goldhaber et al. (2017) used data that tracked teacher candidates from their student teaching school into the teaching profession. They found that early-career teachers tended to be more effective when the student demographics (racial/ethnic composition or free/reduced price lunch status) of the schools in which they completed their clinical practice were more similar to those of the schools in which they found employment. This likely reflects the fact that student teachers develop teaching skills specific to particular types of students.

Within the field, the belief that mentors “influence the career trajectory of beginning teachers for years to come” (Ganser, 2002, p. 380) is widespread, and a number of new studies examine this quantitatively, assessing the extent to which the characteristics, job performance, and effectiveness of mentor teachers predict the later performance of the teacher candidates they supervise during student teaching. Little evidence exists that characteristics of mentor teachers, such as experience or degree level, are related (in expected ways) to later mentee job performance (Goldhaber et al., 2018). But value-added and summative job performance measures of mentors do seem to be important aspects of student teachers’ clinical experiences. Ronfeldt et al. (2018), for instance, found positive correlations between the observational ratings of mentor teachers and the observational ratings of the teacher candidates they mentored who eventually became teachers. Both Goldhaber et al. (2018) and Ronfeldt et al. (2018) found that student teachers who were mentored by teachers with higher value added had higher value added when they became teachers themselves.[10]

Beyond the studies in these five areas, a bit of evidence exists on the length of clinical practice. Although the length of practice is predictive of teachers’ feelings of preparation (Ronfeldt, Schwartz, & Jacob, 2014), several studies (Ronfeldt & Reininger, 2012; Ronfeldt, 2014, 2015) found that it is not a statistically significant predictor of teacher performance. Still, on the whole, the evidence presented in this subsection strongly suggests that some features of clinical practice could be leveraged to support the development of teacher candidates by improving (1) the culture of collaboration, (2) performance assessments, (3) congruence with in-service teaching jobs, and (4) mentor teachers. The estimated magnitude of the mentor value-added effects is particularly large relative to the estimates of other clinical experiences, and significant scope for improvement exists because less than 4 percent of teachers typically serve as mentors (Goldhaber et al., 2019). Taken together, these findings suggest that we should focus particular attention on finding the right mentors to supervise student teaching.

What Is Not Known?

Before focusing on aspects of clinical practices that arguably need further investigation, it is worth emphasizing that caution should be exercised when interpreting the research described above. One reason for this is simply that relatively few studies exist on the aforementioned features of clinical experiences. We know a lot more than we did five years ago, but the state of the literature on clinical practice could still be described as quite thin. Relatedly, much of what we do know is based on value-added estimates of those teacher candidates who become teachers. Although we believe that these estimates are an important measure of the contribution that teachers make toward student achievement (Goldhaber & Ozek, 2019), they are also limited for two reasons. First, they are based on student achievement tests and only cover a slice—typically 20 to 30 percent—of teachers in the workforce. Second, they tend to be based on teachers at the elementary- and middle-school levels.
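
For readers unfamiliar with the approach, value-added models typically regress student achievement on prior achievement and student characteristics and attribute the remaining systematic differences to teachers. A minimal sketch of such a specification (the exact controls vary across the studies cited here) is:

\[
A_{it} = \lambda A_{i,t-1} + X_{it}'\beta + \tau_{j(i,t)} + \varepsilon_{it},
\]

where \(A_{it}\) is student \(i\)’s test score in year \(t\), \(X_{it}\) is a vector of student characteristics, \(\tau_{j(i,t)}\) is the value added of the student’s teacher \(j\), and \(\varepsilon_{it}\) is an error term. Because the model requires both current and prior-year test scores, these measures exist only in tested grades and subjects, which helps explain the limited coverage of the workforce noted above.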

It is also important to recognize that the findings described above may not be causal in the sense that they show how particular experiences change teacher candidates. As an example, although it is no great leap to believe that having highly effective teachers serve as mentors is beneficial in terms of the skill development of teacher candidates, it is also possible that teacher candidates with strong preexisting skill sets seek out effective mentor teachers to work with. Were this the case, we would expect to see a link between the effectiveness of teachers who serve as mentors and the eventual effectiveness of the student teachers they supervise, even if working with a more effective mentor does not lead to greater skill development for a student teacher. To gain a firmer handle on whether and how specific clinical experiences change individual teacher candidates, one needs well-designed randomized controlled trials focused on clinical experience interventions (Goldhaber & Ronfeldt, 2018).

However, for other aspects of clinical experiences, little to no empirical evidence exists. In particular, although we have some evidence from studies assessing the outcomes of teachers who had different clinical practice experiences, almost no evidence exists on the effects of new initiatives. As an example, most studies focusing on the edTPA consider whether the edTPA scores of teacher candidates are associated with teacher effectiveness, but not whether the adoption of performance-based assessments like the edTPA affects the eventual quality of teachers. In other cases, no evidence exists at all. As described above, for instance, an emerging literature exists on the importance of having effective teachers serve as mentors, but field instructors (TPP employees) also play a key role in overseeing clinical practice, and no quantitative studies currently investigate whether the attributes of field instructors are associated with teacher outcomes.

We also know little about the timing of internships. Although most formalized student teaching placements (i.e., those that satisfy state requirements for hours of clinical practice) occur toward the end of a candidate’s enrollment in a TPP, some TPPs provide teacher candidates with far earlier experiences in a K–12 classroom. The UTeach program, an undergraduate TPP that has been found to credential effective STEM teachers, for instance, provides teacher candidates with far earlier clinical practice experiences (Backes et al., 2018). The notion behind this is that the early experiences provide teacher candidates with insights into whether they will find teaching a desirable career, so that those for whom it is not appealing have time to change majors.

Policy Levers and Policy-Making Challenges

When it comes to making changes in the standards and expectations surrounding clinical practice, states have several key levers available to them, the primary one being program approval. In order to offer preparation programs to aspiring educators, TPPs must get those programs approved by the state. States develop preparation program standards and requirements for program approval, which often include standards and expectations for clinical practice. For example, in Michigan, TPPs are required to offer 600 hours of clinical practice for all candidates, including a mix of placement locations and experiences. States can then use accountability measures, like accreditation and the annual reporting required under the Higher Education Opportunity Act, to hold TPPs accountable for implementing those requirements; failure to meet the requirements can result in a loss of program approval. In addition, as mentioned above, several states have adopted assessments like the edTPA or, in Massachusetts, the CAP. States also can pursue or use existing legislative requirements, such as requirements regarding the qualifications of mentor teachers. However, legislative solutions are often more challenging to achieve and harder to change when the needs of the field shift, and in many cases legislation is not necessary because authority can rest with the state agency.

However, challenges still exist with this policy lever. Implementing the clinical experiences component of preparation programs involves messy, distributed governance across the state, the programs themselves, and the placement sites, all of which have their own needs and expectations for the placement. These placements rely on collaborative partnerships between the preparation program and the placement sites, which take time to build and maintain and can be hard to develop when new, more diverse types of placements are required. This is particularly true if those placement opportunities are not easily accessible for a given program. Moreover, a lack of high-quality mentors can be a challenge; for example, schools that are geographically isolated or that have high teacher turnover may have few strong mentors available. These arguably are some of the sites where we most want teacher candidates to have clinical placements, yet they are the placements that face the greatest barriers. Future research could explore whether increasing mentor pay could motivate more teachers to volunteer for mentorship in such settings.

Although states can and do require certain elements of clinical practice, it is extraordinarily difficult to measure the quality and the actual content of those placements, which in turn makes it challenging to understand what works and what does not in clinical practice. Most states do not collect data on where these placements occur, let alone more detailed information about the activities, experiences, and components of those placements. Moreover, this kind of data collection would represent a vast increase in required data submissions, which in turn comes with both technical and human costs at every level of the system (e.g., candidate, preparation program, and the state). So, although states are able to require different types of clinical experiences, they are far less able to understand and evaluate the relative effectiveness of various components of clinical practice.

One area of active policy change is in expanding the amount of time teacher candidates have for clinical experiences. Louisiana, for instance, recently implemented TPP reforms requiring year-long clinical practice “residencies” that are supervised by dedicated, and specifically trained, mentor teachers.[11] This type of commitment to clinical practice represents a significant state investment. The Louisiana reforms have intuitive appeal and are in line with guidance from the Council of Chief State School Officers. Yet they are also costly both to the state and to local TPPs (Hannan et al., 2019). Given the costs of mentor training and extended clinical practice time, we might expect that the sustainability of initiatives like this will depend on credible empirical evidence connecting these types of reforms to in-service teacher outcomes.[12]

Some TPPs and states are focusing on using clinical experiences as a way to expose educators to more types of learning environments and student needs.[13] Programs, for instance, might have a first field placement in a low-poverty school and a second placement in a high-poverty school. This could benefit teachers by giving them the opportunity to practice their teaching skills with different types of students, or even help address areas of teacher shortage by encouraging teachers to consider a greater array of job options. The flip side, however, is that teacher candidates are getting less focused experiences with particular kinds of students. We do not know to what extent these types of tradeoffs are important.

It is also important to note that many states are facing a teacher shortage (real, perceived, or some combination of the two). In an environment where having a sufficient quantity of teachers to meet demand is a concern, policymakers may have significant reservations about implementing policies that are focused on increasing quality but that also add barriers, actual or feared, to entry into the profession.

A high-quality longitudinal study performed in partnership with several states is needed in order to truly understand the educator pipeline overall and, in the context of this policy brief, clinical experiences in particular. Such a longitudinal mixed-methods study would follow a cohort from high school through the choice of the teaching profession, into preparation and candidates’ experiences there, and then into the field, examining placement, mentoring/onboarding, evaluation, and school climate and culture, and how all of these act on educators over the lifespan of their careers. A study of this kind would help us understand not only what is happening, but how and why.

 

References

Anderson, L. M., & Stillman, J. A. (2013). Student teaching’s contribution to preservice teacher development: A review of research focused on the preparation of teachers for urban and high-needs contexts. Review of Educational Research, 83(1), 3–69. https://doi.org/10.3102/0034654312468619

Backes, B., Goldhaber, D., Cade, W., Sullivan, K., & Dodson, M. (2018). Can UTeach? Assessing the relative effectiveness of STEM teachers. Economics of Education Review, 64, 184–198.

Bastian, K. C., Lys, D., & Pan, Y. (2018). A framework for improvement: Analyzing performance-assessment scores for evidence-based teacher preparation program reforms. Journal of Teacher Education, 69(5), 448–462. https://doi.org/10.1177/0022487118755700

Boyd, D., Grossman, P. L., Lankford, H., Loeb, S., & Wyckoff, J. (2009). Teacher preparation and student achievement. Educational Evaluation and Policy Analysis, 31(4), 416–440.

Chen, B., Cowan, J., Goldhaber, D., & Theobald, R. (2019). From the clinical experience to the classroom: Assessing the predictive validity of the Massachusetts candidate assessment of performance (CALDER Working Paper No. 223-1019).

Darling-Hammond, L. (2018). Education and the path to one nation, indivisible. Palo Alto, CA: Learning Policy Institute.

Darling-Hammond, L. (Ed.). (2000). Studies of excellence in teacher education. Washington, DC: American Association of Colleges of Teacher Education.

Dennis, D. V., Burns, R. W., Tricarico, K., van Ingen, S., Jacobs, J., & Davis, J. (2017). Problematizing clinical education: What is our future? In R. Flessner & D. R. Lecklider (Eds.), The power of clinical preparation in teacher education: Embedding teacher preparation within P-12 school contexts (pp. 1–20). Rowman & Littlefield Publishers.

edTPA. (2015). Educative assessment & meaningful support: 2014 edTPA administrative report. September 2015.

Ganser, T. (2002). How teachers compare the roles of cooperating teacher and mentor. The Educational Forum, 66(4), 380–385.

Goldhaber, D. (2019). Evidence-based teacher preparation: Policy context and what we know. Journal of Teacher Education, 70(2), 90–101. https://doi.org/10.1177/0022487118800712

Goldhaber, D., & Özek, U. (2019). How much should we rely on student test achievement as a measure of success? Educational Researcher, 48(7), 479–483.

Goldhaber, D., & Ronfeldt, M. (2018). Toward causal evidence on effective teacher preparation. In J. S. Carinci, S. Meyer, & C. Jackson (Eds.), Linking teacher preparation program design and implementation to outcomes for teachers and students. Charlotte, NC: Information Age Publishing.

Goldhaber, D., Krieg, J., & Theobald, R. (2018). Effective like me? Does having a more productive mentor improve the productivity of mentees? (CALDER Working Paper No. 208-1118-1).

Goldhaber, D., Krieg, J. M., & Theobald, R. (2017). Does the match matter? Exploring whether student teaching experiences affect teacher effectiveness. American Educational Research Journal, 54(2), 325–359.

Goldhaber, D., Krieg, J., Naito, N., & Theobald, R. (2019). Making the most of student teaching: The importance of mentors and scope for change (CALDER Policy Brief No. 15-0519). Washington, DC: National Center for Analysis of Longitudinal Data in Education Research.

Goodlad, J. I. (1990). Teachers for our nation’s schools. San Francisco: Jossey-Bass.

Grossman, P. (2010). Learning to practice: The design of clinical experience in teacher preparation. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/citations?doi=10.1.1.178.4088

Hutt, E. L., Gottlieb, J., & Cohen, J. J. (2018). Diffusion in a vacuum: edTPA, legitimacy, and the rhetoric of teacher professionalization. Teaching and Teacher Education, 69, 52–61. http://doi.org/10.1016/j.tate.2017.09.014

Hannan, M. Q., Hamilton, L. S., & Kaufman, J. H. (2019). Raising the bar for teacher preparation. Retrieved from https://www.rand.org/content/dam/rand/pubs/research_reports/RR2300/RR2303z3/RAND_RR2303z3.pdf

Matsko, K. K., Ronfeldt, M., Nolan, H. G., Klugman, J., Reininger, M., & Brockman, S. L. (2018). Cooperating teacher as model and coach: What leads to student teachers’ perceptions of preparedness? Journal of Teacher Education, 0022487118791992.

Papay, J. P., West, M. R., Fullerton, J. B., & Kane, T. J. (2012). Does an urban teacher residency increase student achievement? Early evidence from Boston. Educational Evaluation and Policy Analysis, 34(4), 413–434. https://doi.org/10.3102/0162373712454328

Ronfeldt, M. (2012). Where should student teachers learn to teach? Effects of field placement school characteristics on teacher retention and effectiveness. Educational Evaluation and Policy Analysis, 34(1), 3–26. https://doi.org/10.3102/0162373711420865

Ronfeldt, M. (2015). Field placement schools and instructional effectiveness. Journal of Teacher Education, 66(4), 304–320. https://doi.org/10.1177/0022487115592463

Ronfeldt, M., & Reininger, M. (2012). More or better student teaching? Teaching and Teacher Education, 28(8), 1091–1106.

Ronfeldt, M., Brockman, S., & Campbell, S. (2018). Does cooperating teachers’ instructional effectiveness improve preservice teachers’ future performance? Educational Researcher. Advance online publication. doi:10.3102/0013189X18782906

Ronfeldt, M., Reininger, M., & Kwok, A. (2013). Recruitment or preparation? Investigating the effects of teacher characteristics and student teaching. Journal of Teacher Education, 64(4), 319–337. https://doi.org/10.1177/0022487113488143

Ronfeldt, M., Schwartz, N., & Jacob, B. (2014). Does preservice preparation matter? Examining old questions in new ways. Teachers College Record, 116(10), 1–46.

St. John, E., Goldhaber, D., Krieg, J., & Theobald, R. (2018). How the match gets made: Exploring student teacher placements across teacher education programs, districts, and schools (CALDER Working Paper No. 204-1018-1).

Zeichner, K. (2010). New epistemologies in teacher education: Rethinking the connections between campus courses and practical experiences in teacher education at the university. Interuniversity Journal of Teacher Education, 68(24.2), 123–150.

[1] This brief is focused only on teacher clinical experiences. For information on teacher preparation more generally, see http://caldercouncil.org/re-framing-the-discussion-about-teacher-education/#.XNL6oC_Mx-U

[2] A lack of agreement exists on how to measure teacher quality, especially in terms of performance, which in turn creates challenges for understanding whether specific clinical experiences affect quality. For example, although one option is to use value-added scores based on test score performance, practitioners continue to have both technical concerns as well as what are best characterized as “I don’t believe it” concerns.

[3] In contrast, a large body of qualitative research exists (e.g., Darling-Hammond, 2000; Goodlad, 1990).

[4] We use the term teacher effectiveness synonymously with value-added as a measure of teacher contributions to student test achievement. Note that we are describing the connection between clinical practice and later student achievement. Research also exists on whether clinical practice has an impact on student achievement in the classrooms in which student teaching is occurring (e.g., Goldhaber et al., 2018).

[5] Note that literature also exists on how various aspects of student teaching are related to the perceptions of teachers in the field. Ronfeldt and Reininger (2012), for instance, found little relationship between the length of student teaching and feelings of instructional preparedness. See also Matsko et al. (2018) and Ronfeldt et al. (2013).

[6] The test score impact is estimated to be 0.04 to 0.10 standard deviations of student test achievement.

[7] This is measured either by survey-based measures about school culture and collegiality among teachers or based on the nonretirement attrition rate (referred to as the “stay ratio”). Ronfeldt found that the stay ratio is correlated with the survey-based collegiality and culture measures.

[8] This new body of research is in line with findings from the Boyd et al. study from about a decade earlier that found that teachers from TPPs that required them to complete a capstone project, often a portfolio put together while student teaching, were more effective.

[9] Teacher candidates who perform one standard deviation better on the CAP during their student teaching placement are found to perform about 0.15 standard deviations better on the state’s assessment as first-year teachers.

[10] Goldhaber et al. (2019) found that significant scope exists for changing which teachers serve as mentors; only about 3 percent of teachers mentor in a given year, many of whom are not highly effective, so large numbers of highly effective teachers do not serve as mentors.

[11] For more background on the TPP reforms in Louisiana, see Hannan et al., 2019. Note, however, that the Louisiana residency differs in key ways from some program-based residency models, such as the Seattle Teacher Residency and the Boston Teacher Residency. Louisiana’s model is statewide and less place-based in the sense that some school district residency models have explicit incentives for teacher candidates to stay in the locality in which they are doing their residency. Also, the school district residency models offer much higher stipends than the $2,000 provided (by the state) to teacher candidates in Louisiana. In Seattle, for instance, the residency pays teacher candidates $15,000 and, in Boston, candidates receive about $14,000. That said, these program-based residency models of teacher preparation are also a good example of the interest in expanded clinical practice opportunities. Although some research exists on the efficacy of residencies (e.g., Papay et al., 2012), it is not clear whether what might be seen as a residency effect has to do specifically with the amount of clinical practice teacher candidates receive given that program-based teacher residencies differ from traditional teacher preparation in more ways than just the extended time in clinical practice.

[12] Note that the references in the prior section about what we know about length of internships are from evidence based on the variation between programs in internship length, not the implementation of a new program like Louisiana’s.

[13] See for example Michigan’s clinical experience requirements and rationale for those requirements: https://www.michigan.gov/documents/mde/Clinical_Experiences_Requirements_648342_7.pdf