
Balancing the Patience of Researchers with the Urgency of Practitioners


by Liam Honigsberg
December 2017

How do we know whether a change will lead to improvement? Researchers say you can’t try it until it’s been proven; practitioners say you can’t prove it until it’s been tried. At TeacherSquared, we’re learning to embrace both the precision and patience of the researcher and the daring and urgency of the practitioner, all in order to help teacher preparation programs get better, faster.

Source: https://www.welcoa.org/wp/wp-content/uploads/2016/11/blog-collaboration.png

Baseline Metrics as a Starting Point

Teacher preparation programs (TPPs) often struggle to collect and access the key performance data needed for continuous improvement. Last year’s Deans for Impact report highlighted this problem, revealing that only 35% of their member-led TPPs had long-term retention data for their graduates, and only 26% had access to other essential data held by state and local education agencies (like observation scores and student achievement results). A report from The New Teacher Project describes the complexities of state and local education agency infrastructures that currently hinder TPPs’ efforts to better understand their own effectiveness. Clearly, the path forward for advancing these partnerships and sharing this data will take time and the sustained engagement of key stakeholders.

Rather than wait patiently for external data partnerships to develop, at TeacherSquared we partnered internally with five TPPs to collect, compile, and report performance data on a shared data dashboard. TeacherSquared is a technical assistance center that creates opportunities for educators to improve themselves, their institutions, and the field of teacher preparation through convenings and the open sharing of knowledge and data within a trusted community. We created a shared data dashboard for our TPPs because they consistently expressed a desire to better understand their own performance relative to other TPPs. The full dashboard shares data on multiple measures, including (but not limited to): annual teacher candidate (TC) retention, TC satisfaction, TC overall completion rates, and TC teaching observations. Notably, our attrition dashboard [redacted], disaggregated by demographic subgroups, sparked important discussions and action planning among our group that warranted further engagement.

We created a shared data dashboard in only six months by capitalizing on our position as a technical assistance provider to multiple TPPs, and by adhering to the following principles:

  • Solidarity: TPPs collectively placed the greatest premium on the needs of K-12 students, and were willing to be vulnerable and transparent as TPPs in the best interest of kids.
  • Parsimony: Not all data should be shared. Not all data is useful. Not all data is easily comparable. TPPs agreed on a maximum of ten metrics to collect and share.
  • Bottom-up: TPPs, not TeacherSquared, decided what data they should share and compare. We led TPPs through a consensus-building exercise but we didn’t dictate.
  • Uniformity: We used a single methodology for computing metrics, we used identical language for survey questions, and we took the time needed to agree on all of it (a minimal sketch of one shared metric appears after this list).
  • Build (don’t destroy): We encouraged connections, community, and fun. We sent out monthly Digests (see this example) to synthesize decisions and to increase engagement.
  • Compare (don’t despair): We agreed to collect only those data against which we can seek to constructively improve. The purpose for this partnership is formative, not evaluative.
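
To make the uniformity principle concrete, here is a minimal sketch (in Python, with a hypothetical record schema, not our actual dashboard code) of what a shared metric definition looks like when every TPP computes annual TC retention in exactly the same way:

```python
from dataclasses import dataclass

@dataclass
class CandidateRecord:
    """One teacher candidate's enrollment record (hypothetical schema)."""
    candidate_id: str
    enrolled_start_of_year: bool
    enrolled_end_of_year: bool

def annual_tc_retention(records: list[CandidateRecord]) -> float:
    """Share of candidates enrolled at the start of the year who were
    still enrolled at the end of it. Every TPP applies this single
    definition, so the dashboard compares like with like."""
    cohort = [r for r in records if r.enrolled_start_of_year]
    if not cohort:
        return float("nan")
    return sum(r.enrolled_end_of_year for r in cohort) / len(cohort)
```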

Developing a Coherent Problem Statement

Creating the shared data dashboard was an important first step toward understanding our current state, but merely reflecting on data does not yield improvement. As we reviewed our data together, we discovered that each institution held different priorities and different concerns with the results. Sometimes, comparing the data seemed to cause more confusion than clarity. For example, although the attrition data varied across TPPs, we couldn’t agree on an ideal rate of attrition that we should all strive to meet, so there was seemingly no benefit to knowing another institution’s overall attrition rate. We had generated a comparable set of data, but if we were to move forward as a group, we still needed the collective will and alignment to address a common problem grounded in that data.

It was through the disaggregated analysis of our attrition data that we developed alignment on a common problem that we all sought to address together. As mentioned, each institution held a different perspective on what rates of overall attrition are good or bad (since some attrition is arguably good and/or necessary). However, when sharing attrition data disaggregated by demographic subgroups, the group discovered that many institutions were facing similarly disparate attrition rates by gender and race/ethnicity. These patterns were often correlated with other programmatic performance measures, or “leading indicators” for attrition. Within our group, there was overwhelming consensus that disparate attrition rates are a problem of great importance and a problem worth tackling: regardless of the overall attrition rate, no subgroup’s attrition rate should differ significantly from it.
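
To illustrate the kind of check that last sentence implies, the sketch below (Python, with made-up numbers; a common proxy for “differs from the overall rate” is to compare a subgroup against the rest of the cohort) runs a two-sided two-proportion z-test on a subgroup’s attrition rate:

```python
from math import sqrt, erfc

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two proportions.
    Returns (z statistic, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)            # pooled rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))          # p = 2 * P(Z > |z|)

# Hypothetical cohort: 40 of 150 subgroup candidates left the program,
# versus 90 of 600 among all other candidates.
z, p = two_proportion_ztest(40, 150, 90, 600)
print(f"subgroup 26.7% vs. rest 15.0%: z = {z:.2f}, p = {p:.4f}")
```

With the small cohorts typical of a single TPP, an exact test (e.g., Fisher’s) would be the safer choice.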


After identifying a common problem for which we had baseline data, we then realized that there was no playbook for what to do about it. How do you ensure that teacher candidates of all races/ethnicities and genders are advancing into the teaching force with equal rates of success? There is considerable research that addresses issues of persistence for teachers already in the field, but far less research about the experience of teachers-in-training. Herein lies the point of tension between researcher and practitioner: the researchers are unclear about the way forward, while the practitioners are eager to make immediate progress. In our case, we believe the solution, and the balance, lies in the Science of Improvement.

Launching Phase 0 of a Retention Improvement Network

In early 2018, TeacherSquared will launch Phase 0 of a Retention Improvement Network, in which we will collectively pilot and study new initiatives that hold promise for improving retention. The “Phase 0” designation deliberately honors both perspectives: the researcher’s, that we lack a proven evidence base for how to improve retention of teacher candidates, and the practitioner’s, that we must nonetheless proceed to figure it out, starting now.

The Phase 0 Network will include practitioners and researchers in different roles across different institutions and will employ the science of improvement in order to better understand what bolsters retention, particularly for underrepresented teacher candidates who are often retained at lower rates than their peers. Already, we have testable ideas on the table that include offering structured support to prevent burnout, providing affinity group opportunities, delivering better classroom management training, and reducing programmatic costs. Participants in the Phase 0 group will benefit from their own learning as they test ideas in their own context, and will also benefit from the learning of other group members who are testing different initiatives.

Source: http://www.ihi.org/resources/Pages/HowtoImprove/default.aspx

The Model for Improvement was explained to me with a catchy dating analogy: When you start dating somebody, you start low-risk. You go to coffee or meet for lunch. If that goes well, you might go to dinner. If that goes well, you might even find yourself meeting the friends or taking a vacation together. Great ideas for improvement should be treated just the same. Try them out, in a low-risk context and on a small scale. Rapidly test those ideas, learn from them, and consider whether you want to dump an idea, or refine it and test it again on a larger scale. If that still goes well, let those great ideas meet the parents or spend the weekend in Paris. Or, in the best-case scenario, tie the knot and make it official: it’s a great idea, and we want the world to know! And so the researcher and the practitioner live happily ever after.
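
As a hedged sketch of how that escalation might be tracked in practice (illustrative names only, not TeacherSquared’s actual tooling), each test of change can be recorded as a small Plan-Do-Study-Act cycle that widens only when the previous cycle went well:

```python
from dataclasses import dataclass

@dataclass
class PDSACycle:
    """One Plan-Do-Study-Act cycle in the Model for Improvement."""
    change_idea: str   # e.g. "peer affinity groups"
    scale: int         # number of candidates in this test
    prediction: str    # what we expect to happen (Plan)
    observed: str      # what actually happened (Do / Study)
    decision: str      # "adopt", "adapt", or "abandon" (Act)

def next_scale(cycle: PDSACycle) -> int:
    """Ramp slowly: widen the test only when the last cycle worked."""
    if cycle.decision == "adopt":
        return cycle.scale * 5    # coffee -> dinner -> meet the parents
    if cycle.decision == "adapt":
        return cycle.scale        # refine the idea, retest at the same scale
    return 0                      # dump the idea

first_date = PDSACycle("peer affinity groups", scale=5,
                       prediction="candidates report stronger belonging",
                       observed="4 of 5 candidates opted in again",
                       decision="adopt")
print(next_scale(first_date))     # 25: the idea has earned a bigger test
```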

Speaking of great ideas, we’re eager to hear about any you have tested (or are currently testing) to improve the retention of teacher candidates, and to hear from you if you or an organization you know might be interested in joining our Phase 0 Network. One of the great joys of this work is the continued affirmation that our TeacherSquared slogan really does hold true: together, we can improve faster than working alone.


About the Author


Liam Honigsberg
Director of Effectiveness Initiatives
TeacherSquared

Liam Honigsberg is the Director of Effectiveness Initiatives at TeacherSquared, a national innovation center, where he measures and accelerates efforts to improve the quality of teacher preparation. His prior work includes leading the development of curriculum to teach data literacy to teachers-in-training at Relay Graduate School of Education, managing data and statistical analysis for the Tripod Project housed at the Harvard Kennedy School, and teaching high school mathematics in Phoenix, Arizona. He has a Bachelor's Degree in Cognitive Neuroscience from UC Berkeley, a Master's Degree in Statistics from Harvard University, and is currently studying the power dynamics of performance management as a doctoral candidate at Brandeis University.

 
