
Buzzwords, Catchphrases, and Data for Continuous Improvement

by Liam Honigsberg
November 2017

As the Director of Effectiveness Initiatives at TeacherSquared, I am as guilty as anybody of using buzzwords and overly technical language to describe our work. “We measure our success via a targeted set of outputs and outcomes associated with our inputs and activities.” “We engage in communities of practice, we participate in working groups, and we launch networked improvement communities.” “We use data feedback cycles to inform our decision-making, to steer our improvement initiatives, and to drive our strategic planning.” These statements are all true, but they abbreviate what we are really doing and leave a lot of room for further explanation.

Source: http://www.good-citizen.org/wp-content/uploads/2014/11/networking.jpg

This year, through an opportunity to lead a Data for Continuous Improvement Working Group, I learned an important lesson about how buzzwords can become a real liability to making progress. The phrase “Data for Continuous Improvement” described a common sentiment among the group’s membership. Generally speaking, we wanted to band together and share information in an effort to get better. We were motivated to join the group, in part, by curiosity about how others were performing and by the promise that we might capitalize on each other’s best practices. However, from the outset the word “data” carried too narrow a meaning, and the word “improvement” too loose a meaning, and we were unable to use “data” to make any “improvements” until we made certain discoveries. Looking back, we should have named our working group something else.

This particular Data for Continuous Improvement Working Group included six technical assistance (TA) centers working toward a common aim of advancing teacher preparation in this country. The term “technical assistance” is difficult to define, but it generally refers to a partnership wherein an outside entity offers assistance that is not funding-driven in order to strengthen the performance of a target site. In our case, the targeted changes take place within teacher preparation programs, and it is our job as TA centers to catalyze that advancement. All of our Centers are supported by a common grant, and all of us provide technical assistance to teacher preparation programs in shared, yet distinct, contexts. So what data would tell us whether we’re doing our job well, and how to do it better?

The problem we encountered early on was that many of us were struggling to find the right data to share and compare in order to make meaningful programmatic changes to how we delivered technical assistance. One main reason was a lack of clarity, or perhaps a lack of imagination, about what the word “data” could actually represent and how it might yield opportunities for “improvement.” We initially treated “data” in the conventional sense: a spreadsheet filled with cells approximating key, usually performance-related, information. As an example, we administered a variety of stakeholder surveys, and then we shared and reflected on that information across TA centers. That data was interesting, and certainly we were curious about how one another had performed, but comparing the data didn’t provide any opportunity for improvement. First of all, the perception data was consistently favorable, so there were few examples of positive deviance. More importantly, our respective contexts and our pathways to partnerships with our sites were too distinct to make precise claims about how we were performing relative to one another. Overall, the data did not present a compelling case for adapting other Centers’ practices in ways that would lead to improvement.


A major turning point in our group occurred when we began to talk about data above and beyond perception survey responses, K-12 value-added achievement scores, or aggregate teacher candidate observation results. The real shift took place when we thought of data as evidence: information, artifacts, feedback, and insight that tell us to what extent what we’re doing is working, and how we might increase the effectiveness of how we provide technical assistance. What we’ve discovered is that, in order to use data for continuous improvement in this context, we first need a compendium of evidence that speaks to the degree to which our support initiatives are being taken up, with fidelity, in a way that results in meaningful changes to the experiences of teacher candidates inside teacher preparation programs. In other words, for us, “data” goes well beyond the survey responses (quantitative or qualitative) in which somebody says how much they like our work and how excited they are to partner with us. That goodwill is certainly essential to driving meaningful change from outside an institution, but to improve our own activities we first need evidence of the degree to which our technical assistance has been implemented.


As the Working Group has moved forward in thinking about evidence of implementation, we’ve started exploring questions about when and why we would collect that evidence, and we’ve acknowledged just how expensive it can be to acquire. Knowing, for example, how a week-long summer professional development for teacher educators has affected those teacher educators’ practice during the school year is a resource-heavy undertaking. More importantly, it’s an undertaking that is only worth doing if it serves the purpose of continuous improvement. If it’s an intellectual exercise, or merely an opportunity to celebrate our performance, it’s not worth doing. Our newest working name is the “Measurement for Technical Assistance Quality” Working Group. It’s not a perfect title, but it’s definitely a step in the right direction.

In a future blog post, we will unpack this document, which explains how TeacherSquared is piloting an approach before, during, and after training institutes to garner evidence that will help us make those institutes even better. Just as a classroom teacher ought to hold himself accountable for engaging students and ensuring their learning, we believe strongly that we are accountable for making our trainings matter for our participants. We want to use data to continuously improve, and that can only happen if we have the evidence necessary to critically analyze the effectiveness of what we’re doing. That’s more than just a bunch of buzzwords!


About the Author


Liam Honigsberg
Director of Effectiveness Initiatives
TeacherSquared

Liam Honigsberg is the Director of Effectiveness Initiatives at TeacherSquared, a national innovation center, where he measures and accelerates efforts to improve the quality of teacher preparation. His prior work includes leading the development of curriculum to teach data literacy to teachers-in-training at Relay Graduate School of Education, managing data and statistical analysis for the Tripod Project housed at the Harvard Kennedy School, and teaching high school mathematics in Phoenix, Arizona. He holds a Bachelor's degree in Cognitive Neuroscience from UC Berkeley and a Master's degree in Statistics from Harvard University, and he is currently studying the power dynamics of performance management as a doctoral candidate at Brandeis University.

 
