“So I was arguing on Twitter…”
That’s how all the best blog posts start, just like the best fairy tales start with “Once upon a time…” In this case, it wasn’t a new argument – in fact it was a disagreement I’ve had before, with the same person. But it’s also something which has been discussed in staffrooms all over the country, probably all over the world. A version of it plays out any time two people with the same job compare notes.
How can we be the best professionals possible without making all the mistakes personally?
It’s true that people learn from mistakes. Sometimes. When we recognize them. When we can change our behaviour based on that insight. When we’re not too hungry, angry, lonely or tired. When we have the chance to reflect on our actions and plan for ‘the next time’. When we can successfully generalize our specific experience.
I was having this conversation, for the hundredth time, with my eldest this week. In particular, we were talking about how the only thing better than learning from your own mistakes is to learn from somebody else’s. It’s generally less painful, expensive and embarrassing. We talked about how, perhaps, it’s the pain of our own mistakes which means they stick better.
Teaching from Mistakes
In education, we learn a lot from screwing up ourselves. From not labelling the beakers, to letting year 7 use power packs with 1A bulbs, to mixing up the two Rebeccas in your class during parents’ evening. We also, especially early in our careers, learn a lot from watching our colleagues, deliberately or in passing.
(Brief digression: we should do more of this. Short observations, team-teaching, co-planning, watching a practical, seeing how they manage a demonstration, the ‘spiel’ for radioactive samples… all great chances to learn from a colleague and give them the ‘view from the back’. Go into an A-level English Lit lesson and talk for ten minutes about the ‘science’ of Frankenstein’s Creature, or invite a music teacher colleague into your Sound lesson to demonstrate high and low pitch. The important thing is to make a solemn promise that this will never show up on performance management.)
The argument I had seemed to come down to one principle. I think that we as teachers can – and should – learn from the successes and mistakes of other teachers as summed up in research. My counterpart feels that if someone isn’t a good teacher, they never will be, and that there’s nothing he can learn about teaching outside of a classroom. He sees educational research as a waste of his time.
But there’s a lot of research out there, which means a lot of student experiences added up into suggestions. Test results that might reveal patterns, implying that one approach on average works better than another. Don’t get me wrong – there’s a lot of crap, too. There are plenty of context-free claims, plenty of ‘studies’ carried out without a control group, action research subject to the Hawthorne Effect and so on. But the argument I had – in this case and before – wasn’t about the bad ‘research’ that’s out there. It was about the very idea that educational research should or could guide our practice at all. And to me, that just seems weird.
During the conversation, @adchempages also used #peoplearenotelectrons. Which is true. But isn’t the whole point of science to use models, simpler than reality, to give us an indication of how reality works? We can model people as particles making up a fluid when we design corridors and stairwells. And that gives us useful information. Nobody suggests that those people travelling on the Underground are actually faceless, indistinguishable drones. (I’m saving the sarcastic comment as it would undermine my point.) But with enough data, and enough people, we can make good predictions about what will usually happen most of the time. There are caveats:
- Averages from large numbers aren’t specific to a small subset, even a homogeneous one
- There are lots of confounding variables, some of which are unknown
- Kids are all different and there’s a fine line between describing and defining them
- Many anecdotes are not the same as data
- We tend to find/remember the results which confirm our expectations
I feel like I’ve been here before. In fact, I have – I wrote a similar post back in 2013 about how I might design a trial, and there’s also my post from when the Evidence-Based Bandwagon was taking off. But it’s worth revisiting, so long as we stay critical about research. We need to be able to ask good questions about the sample sizes, about the methodology, about sources of potential bias. But then we need to take on board the advice and try applying it to our own classes. Let’s imagine a way to test someone’s willingness to use research in their own practice.
- Recruit lots of teachers, all teaching the same subject to the same age group.
- Match ‘equivalent classes’ or ideally randomize.
- Choose two interventions (or simply the same activities in a different order, e.g. theory then practical or the reverse).
- Compare results of the kids in the same test.
A difference between the two averages might be significant (suggesting a real difference) or not (could be due to random chance). The bigger the numbers, the more we should pay attention to that difference. There are lots of statistical tests we could argue about, but for now let’s assume the difference is dramatic enough to convince us that one intervention is better than the other for students learning this concept. Why would you ignore that hint when planning your own lessons?
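The trial imagined above can even be played with in a few lines of code. Here’s a minimal sketch using invented scores – nothing below comes from real classes – where a permutation test asks the question in the paragraph above directly: how often would shuffling the pooled results, ignoring which intervention each pupil got, produce a gap between the averages at least as big as the one we actually saw? If the answer is “hardly ever”, the difference is unlikely to be random chance.

```python
import random
import statistics

# Hypothetical test scores (out of 20) for two matched classes taught
# the same concept via intervention A or intervention B. Invented data,
# purely for illustration.
group_a = [12, 14, 9, 16, 13, 11, 15, 14, 10, 13]
group_b = [15, 13, 17, 14, 16, 12, 18, 15, 14, 16]

# The gap between the two class averages that we actually observed.
observed_gap = statistics.mean(group_b) - statistics.mean(group_a)

def permutation_p_value(a, b, trials=10_000, seed=1):
    """Estimate how often randomly reshuffling the pooled scores into
    two groups of the same sizes gives a mean gap at least as large as
    the observed one (a one-sided permutation test)."""
    rng = random.Random(seed)
    pooled = a + b
    at_least_as_big = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        gap = statistics.mean(pooled[len(a):]) - statistics.mean(pooled[:len(a)])
        if gap >= observed_gap:
            at_least_as_big += 1
    return at_least_as_big / trials

p = permutation_p_value(group_a, group_b)
print(f"mean gap = {observed_gap:.2f}, one-sided p ≈ {p:.3f}")
```

A small p here would be the ‘hint’ the paragraph above describes – not proof, but a reason to take the better-performing intervention seriously when planning your own lessons. With only ten pupils per group the test is weak, which is exactly why pooling many classes through wider research matters.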
Any two classes might be compared without spotting this pattern. Only wider research lets us see what’s going on. The difference might be so small that we decide it doesn’t matter. It might turn out that one intervention works better for girls, the other for boys (which then leads to a hugely political issue, doesn’t it?!). But if we don’t ask, then we’ll never know.
When we look at research, we need to remember that our class might be so different that it doesn’t apply. But if so we need to base that on data, not just ‘because I said so.’ I’m not saying instinct should be ignored, but let’s try informed judgment. Research won’t often give a recipe. It won’t turn us into robots or allow our jobs to be done by computer. What it can do is inform and guide. It can suggest good starting points, or approaches that, more often than not, will be the best way to teach a concept.
We could ‘teach’ science by giving the equation, a load of examples and walking away. But we don’t. Because the data shows that it doesn’t work as well for most students as considering possible links between variables, investigating patterns, explicitly eliminating confounding factors, describing a proportional relationship between cause and effect and then putting this into mathematical terms with fixed values.
In my day job with the IOP, one of the ideas that is really useful at KS3 and KS4 for teaching circuits is the rope model. It’s not new, and it’s not something we invented from nothing. It’s based on research, including ideas summarized in the classic Making Sense of Secondary Science, showing that previous models caused misconceptions about current. It avoids what I call the ‘electron delivery’ trap in models such as pizza delivery trucks, allowing for clearer explanations of AC later on, as well as being a ‘hands-on’ rather than imagined model.
It’s interesting that @adchempages chooses to describe teaching as an art, rather than a science. I can see what he means, in a way. But I’d suggest that there’s a middle-ground. Is it better to think of teaching as a craft? It might be ‘in person’ rather than strictly ‘hands-on’, but that word hints more at the professional judgment and individual style involved than the common perception of a science. Crafts traditionally guarded their secrets from outsiders but shared them openly within the group or guild. The second part, at least, is a model we should aspire to. Let’s think of research as just a conversation within a larger staffroom, and maybe we can avoid making all the mistakes ourselves.