Episode 57

Measuring Impact in K-12 Education: A Conversation with Shelby Danks

Published on: 7th December, 2023

Welcome to another episode of All Things Marketing and Education! We had the pleasure of sitting down with Shelby Danks, Ph.D., founder and principal advisor at ARKEN RESEARCH, who has the unique ability to make complex topics like research, evaluation, and efficacy not only approachable but also inspiring. Shelby delves into the crucial role of efficacy studies in K-12 education, unraveling misconceptions about evaluation and impact. We touch on practical strategies for brands to measure their impact effectively while ensuring it's a manageable process. We also explore why understanding and engaging with these studies is vital from an educator’s perspective.

If you've ever wondered about the metrics and data behind student outcomes, especially with the influence of ESSA funds, this episode is a must-listen. Join us as we navigate through the intricacies of evidence in K-12 education and learn from Shelby's insights.

Transcript

Elana Leoni:

Hello and welcome to All Things Marketing and Education. My name is Elana Leoni, and I've devoted my career to helping education brands build their brand awareness and engagement. Each week I sit down with educators, EdTech entrepreneurs, and experts in educational marketing and community-building. All of them will share their successes and failures using social media, inbound marketing, or content marketing, and community-building. I'm excited to guide you on your journey to transform your marketing efforts into something that provides consistent value and ultimately improves the lives of your audience. Now let's jump right into today's episode.

Hi everyone, welcome to another episode of All Things Marketing and Education. In this episode, I sat down with Shelby Danks. Shelby is the founder and principal advisor and researcher at ARKEN RESEARCH. And we are going to go into what that means, and why that's important, in this episode. I will tell you that she is a very smart and passionate human being. I first met Shelby at ISTE. For those of you who don't know that conference, it is the biggest EdTech conference in the US, and we had such a great, inspiring conversation. I left it saying, "You have to come on the show." Her knowledge is amazing. I feel like the way she talks about research, evaluation, and efficacy makes it approachable, easy, and inspiring. So this is the episode you don't want to miss.

A little bit more about Shelby before we get into it. She has 10-plus years of experience in all things research and evaluation. We'll link to her bio in the show notes, but she's done research and evaluation across the full spectrum of K-12, specifically in math, in reading, with K-12 brands and things like that. She was most recently the Chief Impact and Strategy Officer for Reading Horizons. And in this episode, we're going to get into the role of efficacy studies in K-12 education. If you don't know what that is, we're going to explain it. I swear it'll be something that you'll be able to understand quickly, whether you're an educator or in EdTech. We're going to talk a little bit about the misconceptions about evaluation. And you know me, I like to bring it from strategy and high level into practicality. So we do want to be able to say, "What can we do right away as a brand if we want to start measuring our impact?" And it doesn't have to be so hard.

And from the educator perspective, "Why is it important? What should you look for?" So we're going to touch on all of those things and more in this episode. Please stay tuned. I think this is an episode not to be missed. And Shelby, I could just talk to her for days and days. So please enjoy the episode. All right, welcome everyone to All Things Marketing and Education today. I am so excited to be sitting down with Shelby Danks. Shelby, thank you so much for coming on the show. I know it's a bit random, because we don't know each other that well, but I just had such a good feeling about you when we sat down at ISTE and we had this deep conversation, and I learned so much just in a short amount of time with you, that I couldn't help but open my mouth and say, "You need to come on the show." So I am glad you said yes to this crazy idea, and I am so excited about all the things that people are going to learn from you today.

Shelby Danks:

Thank you. You realize you're asking for it, Elana, when you ask someone to talk about data.

Elana Leoni:

... universe, the more, [inaudible]

Shelby Danks:

Thank you so much, we love geeking out. So this is a pleasure for sure.

Elana Leoni:

Yes, so Shelby, why don't you just start out and talk to me about why you got into this field? I think there are a lot of people on our show, either EdTech professionals, or innovative educators maybe looking at career shifts, or moving up and down. I'd love to know, gosh, why education? And why specifically this field of research and expertise? What moved you to get there?

Shelby Danks:

Great question. I think a lot of people in our situation would say, "Well, life is roundabout," and no one grows up thinking, "I want to be an education researcher when I grow up, because I just have to uncover what works for whom, and under what conditions, man." No one says that going into their career. But ultimately I started off in education. I was an educator; I taught elementary school and junior high math for many years. And then I worked in staff development, and did a lot of new teacher coaching and helping with principal-led continuous improvement.

Then I temporarily left education, went to healthcare to do strategic planning for about three years, and then spent one year at IBM. And then I came to my senses and realized, "I missed those kids, I missed my kids." So I came back; at that point I had my PhD, and I started working as a managing researcher. I really just kind of gravitated toward how do we support and empower educators to better and more meaningfully use data. At the time I was naive and thought that meant outcome data. Over time I've gotten a lot more astute in recognizing that we really need to be a lot more meaningful with how we select and choose measures and really evaluate impact. So it's been a roundabout journey.

Elana Leoni:

... what my colleague [inaudible]

Shelby Danks:

Good deal. Well, I think we're seeing a huge uptick. We're definitely seeing an increase in demand from districts and schools, asking, "What is the evidence of your solution?" We've seen that in the curriculum industry for a long time now; even ESSA had a standard that expected education organizations to say, "This is the impact of our curriculum on student outcomes." And now of course the Department of Education, through its education technology office, has put out additional guidance on evidence for education technology companies to think about how they're evaluating the impact of what they're doing in classrooms. So we're seeing a lot of demand for that.

Interestingly enough, while all districts are increasingly asking for more evidence of impact, they're not always so willing to participate in the generation of data and evidence. So because of that, we've seen a lot more collaborative efforts to really partner with and engage and collaborate with K-12 districts in a meaningful way, so that they can have a huge voice in what type of things we're researching. They can play a part in how we organize and design the studies, and then they can get access to real-time data throughout the study, so that they're not waiting until the end of the year to get some report.

So by involving districts in the design and the implementation of research, we're actually noticing that researchers, interestingly enough, are having greater impact, because we're actually more involved in the weeds of the work. So it's a lot of fun. Those research-practitioner partnerships have really increased in demand. And then ultimately we're still seeing a huge demand, especially on the EdTech side, for, "What is the evidence of your thingy?" And we see that our consumers, basically districts and schools, vary in their level of sophistication in understanding what they even mean by "evidence." Sometimes they're asking for, "What is the evidence base that informed the design of your thingy?" And sometimes they ask, "What is the impact of your thingy?" So really trying to understand what they need and when they need it is really critical in our work.

Elana Leoni:

Okay. You said some good things in all of this. So on a high level, evidence plays a critical role in K-12 education; it always has. But what I heard you bring up is ESSA, and then also, with the influx of ESSA funds, is there something to show some type of impact, whether it be student outcomes, which we use so generically as an umbrella term. It's like, "Student outcomes, we improve ..." Go to any EdTech site and you'll see that phrase. But are there also key metrics that we will measure and look for, and have studies set up around that too? I think maybe we will dive in a little bit deeper on that, but I want to jump back into a little bit of the metrics and the data that you're talking about too.

So let's start with student outcomes again. You talked about the importance of efficacy, and the role of research in EdTech in particular. What are people looking for? And does it depend on the type of product or service in general? Or can you make some generalizations about the type of data that we should be tracking?

Shelby Danks:

... to that, and that's [inaudible]

And so one of the things I often share is that most EdTech organizations are on a collision course with the idea of "implementation integrity." It comes down to the role of EdTech and where it sits in that ecosystem. Alnoor Ebrahim put out a great book a few years ago called Measuring Social Change. In that book he talked about how, depending on where your role is in this ecosystem, it really impacts what you should be measuring as your outcome. For example, if you have a low level of certainty about cause and effect, and you have a low level of direct impact on the outcomes that you're trying to influence, then you need to be measuring more proximal outcomes, and you should measure outcomes by your influence on the system. So that's a lot of gobbledygook basically for saying that ultimately the measure of impact of any educational technology is implementation integrity; that is the most proximal and really meaningful measure.

So instead of just doing one big study, and I think a lot of our efficacy research is like this, where we look at, "What is the impact of my curriculum, PL, or solution on outcomes?", it really benefits us to break that into two studies, to really understand not only what works, but for whom and under what conditions. So for example, what is the impact of using this curriculum, or this enabling technology, on teacher implementation? That's study number one. A separate study is, once we have a lot of implementation data, we can say, "What is the relationship between implementation and outcomes?"

By breaking that into two separate studies, what we've done is, number one, we've answered more than just the question of what works; we understand for whom and under what conditions. And number two, and this is my favorite, we've generated a ton of free user experience research, because a lot of people will separately go do UX research. Especially in the early phases, when they say, "Well, we want to know the impact," but they don't really have a strong theoretical model yet as to how their solution should work. It's non-prescriptive; a teacher could use it in a variety of ways.

So this is actually a really good way for them to better understand how it should look. It just generates a lot of free user experience research. So instead of doing user experience research and hoping you get a good efficacy study, if you invest in a good efficacy study and focus on implementation integrity, then you get a ton of free user experience research. It's super exciting.

Elana Leoni:

Yeah, I've actually never heard it put that way. It's almost like you've reverse-engineered it-

Shelby Danks:

[Inaudible]

Elana Leoni:

... into [inaudible]

When you were talking, you called it theoretical models. In my head I was trying to map out a theory of change, or a theory of action, where I was like ... And this is a little bit of my evaluation background, but I don't have much. But I was trying to visually go, "Okay, so what are our inputs that she's talking about? How directly correlated are they to the intended output?" And I'm thinking about that, right?

Shelby Danks:

I think so. And it's funny, I can't even keep up myself with all the different verbiage out there as to what we call a theory of change or a logic model. It's like everybody has their own loose definitions, but ultimately that's right. So a lot of times when we work with a provider, they'll say, "We want an impact evaluation." So we'll start to ask questions about what is the theory, how does it work?

And what we find is that it's actually a non-prescriptive model. So like you said, it could be used in a variety of ways. So until you actually have a little bit more of an idea as to how your solution should work, instead of just how it could work, you're actually not really ready for an impact evaluation. So that's where I typically come in and say, "Let's do an exploratory evaluation that looks at the relationship between using your materials and implementation. Let's look at the varietals of implementation. Let's find out what seems to be working better for teachers and why. Then we can create a theoretical model." So implementation research usually ends up generating a better theoretical model than waiting until you do an impact study and you get an impact with an outcome, and it's a thumbs up/thumbs down scenario. So that's just a really fun way to better understand even the value of your solution as well.

Elana Leoni:

Great. So let's back up just for a hot minute, because there are so many terminologies in EdTech in general, so many acronyms, but especially within the field of evaluation, research, and efficacy. Can you break down the definitions for us, the difference between evaluation and research? And then why don't we talk a little bit about the misconceptions of the field?

Shelby Danks:

Good deal. Well, I think the difference between research and evaluation depends on who you ask. Ultimately it comes down to the purpose: what is the purpose of it? Is it to generate new knowledge? Or is it to evaluate whether something is working? If it's to evaluate the extent to which something is working as intended, then we typically call that evaluation. If it's to generate new knowledge that we haven't necessarily had an understanding of, or to test a theory or a hypothesis, we typically call it research. Oftentimes those activities can be done in one fell swoop, depending on how well you design both of those things. I think one of the major misconceptions about evaluation is that it can be done in a vacuum. So for example, there's an incredible nonprofit organization. They've been well-funded for a long time, and they have a really beautiful solution for how to better communicate with and engage with parents, and it's all tech enhanced.

, we call it, "The [inaudible:

... integrate our content, [inaudible]

Elana Leoni:

Okay, so pause there. You're probably going to get into so many other misconceptions, but you're saying that especially in EdTech, it's not just, "Because of A, there is B"? There's not this direct correlation, because the ecosystem of K-12 is quite complex, and there are multiple stakeholders, parents, educators, para-educators, admins, that might be using the product or service. Plus there might be some other things in play. Like what type of curricula are they using? What type of school environment are they in? What supports do they have? So you kind of have to look more holistically, and the misconception is that sometimes it can be quite overly simplistic thinking, "Because of A, then there's B." Is that what you're saying?

Shelby Danks:

Absolutely. A prime example of this is that right now we actually have two competing instructional models happening in the math and literacy world. So the Science of Reading and the Science of Math are going on in the industry, and the Science of Reading would recommend highly direct instruction that's very explicit and multisensory for students to learn. So a lot of direct instruction takes place using the Gradual Release of Responsibility model. At the same time, in mathematics, we recommend a more problem-based approach, where we are learning through inquiry.

So the habits, practices, and instructional routines a teacher has to have to effectively implement the Science of Reading principles in a literacy curriculum, and then to turn around in mathematics and completely change their model and change their habits, are very complex. So for example, if I were going to evaluate the impact of a mathematics tutoring service, I would have to understand how my solution is interacting with all of those other factors to produce the outcomes that we're getting. And until we see those contextual factors come alive in an evaluation report, it's really hard, if I'm in a different school district reading that report, to make a connection to my own context and find out how likely I am to receive similar results. So there are a lot of different ways in which different things happening at that campus can really drive outcomes in ways that are really hard to see, unless you're really getting in there and focusing, again, on implementation integrity.

Elana Leoni:

... questions, in the [inaudible]

Shelby Danks:

Absolutely. That's a huge one. And then of course there's the idea that when we talk about efficacy, we're just focusing on student outcomes, as you mentioned before. So it's critical to focus on implementation for sure.

Elana Leoni:

Yeah. All right, so when I'm hearing all of this, I'm hearing the EdTech brand almost scream at me and go, "This sounds expensive, this sounds long, I don't have time. If I'm a CEO and just starting up, I'm doing 20 jobs, I have a very small runway. I know the importance of research, and making a difference is why I got into EdTech, but how do I prioritize it?" Or maybe, what would you recommend for all of those EdTech brands that don't have efficacy studies or data, and that might be intimidated by it?

Shelby Danks:

Great question. And it can be intimidating, because the way we write reports is super fancy. And sometimes even I can't make heads or tails of certain ways that they're put out. So it makes sense that there would be a fear of that. However, there are a lot of research firms out there that focus more on instructional models; they care more about how it's being implemented than just the numbers.

So I would say the first piece of advice certainly would be that, depending on where you are in your journey, the type of research that you're going to need is going to vary. Every research partner will specialize in a different part of that journey, to be honest. So for example, I have a lot of really great collaborators who are just masters at impact research. And typically, if I hear that they're interested in looking at the relationship between the use of their curriculum and outcomes, and they want to run a randomized control trial, or some type of heavy-handed quasi-experimental study, I will definitely refer them to someone who I would trust to be able to do that meaningfully. However, if they're earlier in their journey, and they're still trying to understand how their solution is being used in classrooms, then actually a lot of implementation research can be done pretty simply.

... learning coaches, and [inaudible]

Elana Leoni:

Great. And I'm going to follow up with you on whether you have any resources or things to help, depending ... Like you said, there are different journeys within EdTech too. If there are any resources that come to mind, let's put them in the show notes to help people navigate there.

All right, so we talked at a high level about the state of evaluation and what critical role it plays within education. And more and more, it's not okay to just say, "Hey, it engages students, hey, it's fun." That's shiny-new-object tech syndrome. It has to map to something that the district or the school has carefully laid out in their strategic plans. So we talked about all of that at a high level, what the role is, and then the beginnings of how to get started, and the misconceptions. But I think it's interesting, so let's flip it around. From the educator's perspective, they are looking at a lot of EdTech tools that sometimes say they have research. I'd love for you to poke holes, and help them, and say, what should they look for when they're evaluating programs and products, or reading secondhand evaluations? How do they make sure what's real and what's fake? What's watered down? I know that's a big question. But maybe you have a couple of tips, like, "Don't do this, see this. This is great."

Shelby Danks:

I love that question. Goodness, I hardly know where to start. There are so many different ideas here. There are two sides of me, the Jekyll and the Hyde. So the purist researcher inside of me, recognizing that some of the more rigorous evaluation designs require a little bit more expertise, would say: definitely reach out to someone who can help you critically appraise and interpret a lot of these reports. So bring on an advisor, bring on a research partner who, if they can't poke holes in some of those studies, can at least give you some advice on the limitations of those studies, and can help you understand what they can and can't tell you. So that's the short answer.

The other answer is, as researchers, we need to get better at effectively communicating exactly what is the most important thing about our solution. Ultimately that means more context in our reports. So one thing that I always encourage readers to look at is: to what extent are the researchers providing enough meaningful context about the actual study that was done for you to be able to say, "Oh, this is similar to my situation, or different from my situation"?

I think one of the biggest critiques that we often get is, "Well, that worked in that instance, but it will never work in my instance." Which on the one hand is not always true. I mean, there's certain things that will work no matter what. But it is a valid question to ask, "How does this study relate to my own context?" And so insofar as the researcher can provide more detail there, it makes it way more useful.

I actually have a little quick guide. In the past, I think I wrote a little blog post about, what do you call it, good, better, and best: how to evaluate success stories that are typically put on the web. Anyone's welcome to go and look at that little resource as well, but it gives a little bit of guidance on what to look for.

Elana Leoni:

Great. And we'll put that in the show notes as well, because I know we could do an entire episode just on certain things that you should look for, and what not.

Shelby Danks:

Oh my goodness.

Elana Leoni:

I know that we talked previously at times about the importance of educator testimonials, and making sure that you have a diverse range of them: "Oh, this is a rural school, this is a public rural school, this is a Title One school, or an urban one." So being able to show the diversification, even with small qualitative things. Just simple things; it doesn't have to be tied to a huge study to show that this has been somebody's experience and to bring them in.

I sometimes look at research and evaluation as a way to scaffold too: what are the small baby steps along the way, the things we don't think of, so that it doesn't feel so intimidating to start? But what I hear from you in general, what I love, is that there are some people out there that you can partner with who can be somewhat affordable, and who can set you up with a structure so you know how to best position yourself in the future and make sure that you're not leaving things on the table. So 10 years down the road, you're not scratching your head going, "Gosh, I really wish we had captured this data, or thought about it this way."

Shelby Danks:

That's right. And I've seen organizations do it differently as well. So I worked for one curriculum organization that was led by some very strong academics in their particular fields. They recognized the complexity of adopting a new curriculum, particularly one that required such instructional shifts as a reform curriculum would. What they did is they invested for multiple years in meaningful, deep implementation studies, to understand how it was looking in classrooms, how it was thriving, what were the enabling conditions that either led to the success, or the lack of success, or the sustainability of the curriculum, et cetera. Then later on, once they had a really full understanding as to what it could look like, they created a should-statement, and then created rubrics. And now they're commissioning a really large company to come in and do the multimillion-dollar efficacy study that a lot of organizations strive for.

On the other hand, I've worked for organizations who do it the other way around. They understand the market pressure to demonstrate efficacy. So they'll go and do a quick efficacy study, they'll get the results, they'll look at it, and say, "Neat, now we need to understand why on earth we're making a difference." So then they'll invest in implementation studies after they have the efficacy study, to really understand a little bit more about it. So I would encourage any organization out there: just because you may have done something that hasn't been your favorite in the past, or you have limited information, there's definitely more learning to be done that can be very exciting to do.

Elana Leoni:

All right. Gosh, every time I see you, I'm like, "Let's just talk, let's talk more. Let's talk more about professional development. Let's talk more about evaluation," because there's so much, and there's so much nuance. But for those of you listening, I hope this gave you a little tidbit. And at the very least you thought, "Wow, maybe we're not thinking about the role of efficacy and research, and how it relates to what we're doing, as strategically as we could be. Maybe we're not set up to integrate it in."

So I want you to pause, maybe have some conversations with your leadership. If you are an educator or a leader in education evaluating a lot of EdTech, think about the resources that you have to critically evaluate any research that they're pulling up and saying, "Here's how we map to student outcomes." And then, in terms of all of the things you look at to select an EdTech provider, where do efficacy and research fit within it? These are open-ended questions. I have people like Shelby on just to broaden your perspective, and to make you go, "Gosh, I didn't even know there are people like Shelby out there, first of all, that devote their career to this." And what a beautiful complement to EdTech, because what else are we doing than actually trying to make an impact? And if we don't know, and we don't measure it effectively with all the nuance in K-12, then what are we even doing?

So thank you, Shelby, for coming on. One of the last questions we ask our guests is always around the field of education. Even though it's beautiful and mission-driven, and, like you said, you were in the classroom and you got to see that light bulb in students' eyes, and now you get to design studies to really prove that things are making a difference, it still can be exhausting. It still can be draining. So what do you do on those days where your tank is empty and you need to recharge? Are there specific habits or rituals that you do that help put a pep in your step?

Shelby Danks:

Absolutely. Well, I'm an avid reader. I probably read too much. However, I'm getting better at this. And so recently, believe it or not, I have learned how to ride a motorcycle. My husband, for his birthday, bought me a motorcycle, so that we can go riding together.

And it's been an interesting journey. I think learning to ride a motorcycle, it's very spiritual almost, because you learn this principle of how do you learn to ride within your skill level? It's called risk offset. It's about how do you constantly manage how you ride within what you know you can do. And every single time I do, I think about that. I think about education and how teachers think about how they teach within their skill level, and how we can better support that as they grow. So it's been a lot. On the one hand, it's fun for recharging. On the other hand, I always end up thinking more about work, so it's just such a pleasure to learn that new thing.

Elana Leoni:

Well, that's awesome. I would never have guessed that. And I think it's funny that he bought you a motorcycle for his birthday?

Shelby Danks:

Yeah, [inaudible]

Elana Leoni:

That's awesome. Well, Shelby, thank you so much for coming on our show. I will put all of the resources you give me on our show notes, and just thank you for your time and the passion you bring to the industry.

Shelby Danks:

You're so welcome. Thanks for having me.

Elana Leoni:

Take care.


About the Podcast

Marketing and Education
A podcast about social media marketing, community-building, and content marketing strategies.
What if marketing was judged solely by the level of value it brings to its audience? Welcome to All Things Marketing and Education, a podcast that lives at the intersection of marketing and, you guessed it, education. Each week, Elana Leoni, CEO of Leoni Consulting Group, highlights innovative social media marketing, community-building, and content marketing strategies that can significantly increase brand awareness, engagement, and revenue.

About your host


Elana Leoni

I'm Elana Leoni. I've devoted my career to helping education brands build awareness, engagement, and revenue, and I'd like to show you how as well. Every week, you'll learn how to increase your social media presence, build a community, and create content that matters to your audience.