Learning Paper

Using Data to Improve Teacher Preparation

Published: October 2015

In 2009, the Bush Foundation launched its ten-year Teacher Effectiveness Initiative. It was based on the theory that better teacher preparation programs would lead to more effective teachers and, in turn, improved student outcomes. Fourteen higher education partner institutions and their teacher preparation programs (TPPs) were at the Initiative’s core; together they committed to train and place 25,000 new, effective teachers** across Minnesota, North Dakota, South Dakota and the 23 Native nations that share that geography. To do this, the partners agreed to transform their approach around four key pillars: recruitment, preparation, employment and support. The Initiative also paid close attention to how partners would develop and use measurement systems to improve training strategies, and how they would understand the effectiveness of their graduates as new teachers. This learning paper examines these measurement strategies.

[Map: The 14 NExT Institutions]

Measurement Strategies Overview

The Foundation deployed a number of strategies to increase the data capacity of the TPPs, and to build and better connect the measurement systems they use to continuously improve their programs. With these approaches, the Foundation sought to foster a highly collaborative environment in which the partners could drive their own strategies. The approaches included:

  • INCREASING PARTNER DATA CAPACITY/LITERACY
    • Data collection and measurement capacities initially varied across individual Initiative partners. The Foundation offered specialized technical assistance and coaching from a network of consulting organizations. Additionally, each partner received grants and participated in convenings designed to improve data management systems and to build staff capacity and appreciation for the use of data at the partner institutions.
  • DEVELOPING THE COMMON METRICS
    • Initiative partners formed a working group to determine how to collect and share program improvement data. They developed a shared system of four surveys that gather critical data from and about teaching program participants. Today, three of the surveys are administered to individuals as they enter a teacher preparation program, as they complete their training and after they have completed their first year as a new teacher. The fourth survey is administered to each new teacher’s supervisor after that first year of teaching. Together, these surveys form the Common Metrics: a robust shared system for collecting, analyzing, aggregating and comparing data across all Initiative partners.
  • SUPPORTING THE EDTPA ROLLOUT
    • Through a grant to the Minnesota Association of Colleges of Teacher Education, the Foundation helped establish statewide implementation of the edTPA in Minnesota. The edTPA, developed at Stanford University, measures the specific skills teacher candidates need to acquire throughout their practice. Today, Initiative partners use the edTPA to evaluate and strengthen candidates’ classroom practice, and to identify specific areas where a program’s curriculum may need improvement.
  • LINKING TEACHER PREPARATION TO TEACHER EFFECTIVENESS
    • As data systems were being built to drive teacher preparation program improvement, another aspect of the work centered on answering the question, “How do we know if new teachers are performing successfully in the field after they graduate from one of our preparation programs?” Value-added measurement, an emerging model of teacher evaluation that was growing in popularity across the country, was intended to be the core tool for assessing student progress. If it worked, the value-added system would provide critical data about the effectiveness of teacher preparation program graduates, and would help the Foundation understand the overall impact of the Initiative.

What is Working Well

Requiring Initiative partners to participate in data practices:

Each higher education partner committed to collect and share data. These commitments, formalized in Memorandums of Understanding signed by the presidents of each college or university, helped establish trust and transparency, and promoted collaboration across the partners (“we’re all in it together”). The Foundation provided funding to help each institution advance its commitment to using data, offered technical assistance from expert coaches and hosted a continuing forum where institutions could work together and learn from one another.

Having a growth versus “gotcha” mindset:

The Foundation and Initiative partners agreed to treat all data and learning as valuable assets for improvement—not as tools to identify and report weaknesses. Through open conversations, the partners have also helped one another improve data practices, shape best practices and build tools and resources.

Building the Common Metrics:

The Common Metrics provided the innovative data collection and analysis system the Initiative partners had hoped for. While Initiative partners found value in the baseline data, they especially valued the opportunity to compare their own institutional data to the collective aggregate. Doing so let each partner see where it stood, relative to its peers, on multiple aspects of teacher preparation.
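To make that comparison concrete, here is a minimal sketch, in Python, of how an institution’s survey means might be set against the collective aggregate. The survey items, institution names and scores are all invented for illustration; they are not the actual Common Metrics instruments or data.

```python
# Illustrative only: a hypothetical Common Metrics-style comparison of one
# institution's survey means against the aggregate across all partners.
# Item names, institutions and scores are invented for this sketch.

from statistics import mean

# Hypothetical exit-survey results (1-5 scale) keyed by institution.
survey_results = {
    "Institution A": {"preparedness": 4.2, "mentoring": 3.8},
    "Institution B": {"preparedness": 3.9, "mentoring": 4.1},
    "Institution C": {"preparedness": 4.4, "mentoring": 3.5},
}

def aggregate_means(results):
    """Mean score per survey item across all partner institutions."""
    items = {item for scores in results.values() for item in scores}
    return {
        item: mean(scores[item] for scores in results.values())
        for item in items
    }

def compare(institution, results):
    """Show how one institution sits relative to the collective aggregate."""
    overall = aggregate_means(results)
    for item, agg in sorted(overall.items()):
        own = results[institution][item]
        print(f"{item}: {own:.2f} vs. aggregate {agg:.2f} ({own - agg:+.2f})")

compare("Institution A", survey_results)
```

The same pattern extends from these two invented items to a full survey instrument; the value to a partner lies in the signed gap between its own mean and the aggregate.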

The collaborative process that created the Common Metrics also gave Initiative partners a national platform, especially as the Metrics kept pace with rising accreditation and government reporting standards. Since then, Initiative partners have been regularly called on to present at national teacher preparation conferences, and to advise other colleagues who are engaged in similar improvement efforts.

Enabling Initiative partners to guide their own work:

Once the Initiative’s collaborative framework was established, partners determined their own direction, vision and workflow. This was especially evident in the design of the Common Metrics. The Foundation supported the work by providing consultants to build capacity, facilitate processes and offer feedback.

Using convenings to improve and explore:

Data-focused convenings have taken a variety of shapes and served a range of purposes across the field. Within the Initiative, convenings have brought higher education and K-12 partners together to discuss data-sharing opportunities, and have gathered faculty and staff to analyze and apply teacher preparation program improvement data. Initiative partners have shared their own learning and best practices during annual Initiative summits and at multiple Annual Meetings of the American Association of Colleges of Teacher Education.

Challenges

Gaining faculty trust and support for measurement and collaboration:

Not all faculty at partner institutions were ready to embrace an entirely new data management approach. This made it harder to gain momentum and delayed the Initiative’s overall progress; as a result, some partners are further ahead in their work than others. Data sharing and collaboration at this depth are also unusual in higher education, and building trust required an extended timeline.

Increasing data capacity/literacy efficiently:

Although partners gathered data before the Initiative began, their abilities to attend to its quality, analyze its meaning and apply it to program improvement varied widely. Most partners did not have a dedicated data specialist or a management system adequate to their needs. The Foundation learned each institution’s capacity and level of data literacy only over time; understanding each site’s abilities with data earlier might have allowed earlier, more direct support to bring every partner up to speed.

Understanding performance of teachers based on student achievement:

The Foundation engaged an expert early on to build a value-added measurement system. The system would capture how teacher preparation program graduates were performing in comparison to their peers across the field. The idea was to use insights from the system to learn what difference the Initiative was making.

This was a bold idea, and a big risk for the Foundation. Although it was growing in popularity, value-added measurement was complex, expensive and still in its early stages. The value-added calculations themselves could be performed, but K-12 and higher education data systems were not compatible or connected enough to link teachers’ data back to the teacher preparation programs from which they graduated. Furthermore, some critics noted that value-added measurement was only a single snapshot that did not take into account other important qualities of teaching and learning. After investing a great deal of time and resources, Initiative partners agreed that a value-added system was not the right solution, and the Foundation decided to phase out the effort.
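For readers unfamiliar with the technique, the sketch below illustrates the core idea behind a value-added model: predict each student’s current score from a prior score, then treat a teacher’s average prediction residual as a rough estimate of that teacher’s contribution. The records and scores are invented, and the system the Foundation commissioned was far more sophisticated; this shows only the general shape of the calculation.

```python
# Illustrative only: the core idea behind value-added measurement.
# Regress current test scores on prior scores, then average each
# teacher's residuals as a rough estimate of "value added".
# All records below are invented for this sketch. Requires Python 3.10+.

from collections import defaultdict
from statistics import linear_regression, mean

# (teacher, prior_score, current_score) for hypothetical students.
records = [
    ("Teacher A", 60, 68), ("Teacher A", 75, 80), ("Teacher A", 82, 88),
    ("Teacher B", 58, 60), ("Teacher B", 77, 76), ("Teacher B", 85, 84),
]

prior = [r[1] for r in records]
current = [r[2] for r in records]

# Fit the expected current score as a linear function of the prior score.
slope, intercept = linear_regression(prior, current)

# A student's residual is the gap between actual and expected scores;
# a teacher's value-added estimate is the mean residual of their students.
residuals = defaultdict(list)
for teacher, x, y in records:
    residuals[teacher].append(y - (intercept + slope * x))

for teacher, res in sorted(residuals.items()):
    print(f"{teacher}: estimated value added = {mean(res):+.2f} points")
```

Even this toy version hints at the critics’ “single snapshot” point: the estimate rests entirely on two test scores per student and ignores everything else about teaching and learning.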

Wish List

The Initiative has given us a deeper understanding of how the education field collects and shares data. This complex work encountered barriers rooted in practices and perceptions that, if changed, could open greater possibilities for the broader education ecosystem. Examples of ways to address these barriers follow.

Change attitudes around data and measurement:

Strong measurement and evaluation practices were often missing from teacher preparation programs, especially those with smaller faculties. This was likely for a number of reasons, including concerns about data privacy, validity and reliability. Nearly seven years later, the Initiative has fostered a real shift in attitudes toward measurement practices. Conversations across partners now focus on how these systems can spark innovation and creative problem-solving.

Create better alignment between teacher preparation and evaluation metrics:

Although the edTPA has started to bring better practice-focused measures to teacher preparation, there is still a long way to go to align how teacher candidates are evaluated during their training with how teachers are evaluated in the field. Creating this alignment could smooth the transition from teacher preparation to K-12 employment, and could let K-12 districts target their hiring toward candidates who will meet their particular evaluation standards.

Seek policy changes that would improve the connectedness and use of data systems:

While robust data systems are being developed, or are already in place, across the Bush Foundation region, policies may not be in place to connect or use these systems to their greatest potential. For example, a policy requiring districts to report their teacher evaluation data to the state would allow teacher preparation programs to follow their graduates into teaching and to understand how well they are doing, both overall and on particular aspects of delivering instruction.
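As a hypothetical illustration of what such a reporting policy could enable, the sketch below joins invented state evaluation records to an invented program roster using a shared teacher identifier. Every field name, identifier and score here is an assumption made for the example, not a description of any existing state system.

```python
# Illustrative only: what a policy requiring districts to report teacher
# evaluation data to the state could enable. With a shared teacher ID, a
# preparation program could join state evaluation records to its own
# graduate roster. All identifiers, fields and scores are invented.

# Hypothetical roster of one program's graduates, keyed by state teacher ID.
graduates = {
    "T-1001": {"name": "Alice", "completed": 2013},
    "T-1002": {"name": "Bhavna", "completed": 2014},
}

# Hypothetical district evaluation records reported to the state.
state_evaluations = [
    {"teacher_id": "T-1001", "year": 2015, "instruction": 3.6, "overall": 3.4},
    {"teacher_id": "T-1002", "year": 2015, "instruction": 2.9, "overall": 3.1},
    {"teacher_id": "T-9999", "year": 2015, "instruction": 3.2, "overall": 3.3},
]

# Follow graduates into teaching: keep only records for this program's alumni,
# reporting both the overall rating and a particular aspect of instruction.
for record in state_evaluations:
    grad = graduates.get(record["teacher_id"])
    if grad:
        print(f"{grad['name']} ({record['year']}): "
              f"overall {record['overall']}, instruction {record['instruction']}")
```

The join itself is trivial; the barrier the paper describes is policy, namely that without mandated reporting and a shared identifier, the records on either side never meet.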

Consider a more broadly accepted definition of teacher performance:

Sharing data between K-12 systems and teacher preparation programs could be made easier if teacher evaluation metrics did not exist in so many different forms. For example, common definitions across states would allow greater continuity in performance tracking and professional development for teachers who have worked in multiple states. It would also make it easier for teacher preparation programs to link to K-12 evaluation systems and track the effectiveness of their graduates, who often land in hundreds of different schools across multiple states.

Explore ways to shape and spread best practices and innovation in measurement:

The partners’ use of coaching and convenings opened dialogues around data best practices. It also eased apprehension about collaboration between traditionally siloed, competitive institutions. The Initiative’s collaborative spirit continues to promote innovation and problem-solving across the region.

Increase human capital and leadership in the field of education data:

The Foundation has worked with some of the best thinkers and practitioners in measurement and evaluation to advance the Initiative. Generally, the field could use more practitioners and leaders in this space, so that education institutions have equitable access to the expertise they need to measure and improve their programs effectively.

Conclusion

We are thrilled by the Initiative’s early progress, and we see the great potential of what may emerge in upcoming years. After encountering what seemed, at times, like endless barriers in this work, we learned that great patience and perseverance can lead to progress. However, there is still a long way to go to build data systems to their potential. We believe that making strides toward the wish list above will strengthen teacher preparation programs and K-12 data measurement systems, and will open new opportunities for breakthroughs.

Our 14 Initiative partners continue to improve the data systems that inform their work, and they are committed to their ongoing collaboration. Starting in the fall of 2015, the partners will build a mechanism to assess the impact of teachers who graduated from their respective preparation programs. If successful, this work will contribute to the teacher preparation field, and it will help the Foundation understand how well its original theory of change has held up.

**The original goal of training and placing 25,000 teachers was discontinued due to a number of factors; see the midterm evaluation learning paper.