Measurable Outcomes

Following a conversation on Twitter about the phonics screening test administered in primary school, I have a few thoughts about how it’s relevant to secondary science. First, a little context – especially for colleagues who have only the vaguest idea of what I’m talking about. I should point out that all I know about synthetic phonics comes from glancing at materials online and helping my own kids with reading.

Synthetic Phonics and the Screening Check

This is an approach to teaching reading which relies on breaking words down into parts. These parts and how they are pronounced follow rules; admittedly in English it’s probably less regular than many other languages! But the rules are useful enough to be a good stepping stone. So far, so good – that’s true of so many models I’m familiar with from the secondary science classroom.

The phonics screen is intended, on the face of it, to check if individual students are able to correctly follow these rules with a sequence of words. To ensure they are relying on the process, not their recall of familiar words, nonsense words are included. There are arguments that some students may try to ‘correct’ those to approximate something they recognise – the same way as I automatically read ‘int eh’ as ‘in the’ because I know it’s one of my characteristic typing mistakes. I’m staying away from those discussions – out of my area of competence! I’m more interested in the results.

Unusual Results

We’d expect most attributes to follow a predictable pattern over a population. Think about height in humans, or hair colour. There are many possibilities but some are more common than others. If the distribution isn’t smooth – and I’m sure there are many more scientific ways to describe it, but I’m using student language because of familiarity – then any thresholds are interesting by definition. They tell us that something interesting is happening here.

The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” but “That’s funny …”

Possibly Isaac Asimov. Or possibly not.

It turns out that with the phonics screen, there is indeed a threshold. And that threshold just so happens to be at the nominal ‘pass mark’. Funny coincidence, huh?

The esteemed Dorothy Bishop, better known to me and many others as @deevybee, has written about this several times. A very useful post from 2012 sums up the issue. I recommend you read that properly – and the follow-up in 2013, which showed the issue continued to be of concern – but I’ve summarised my own opinion below.

phonics plot 2013
D Bishop, used with permission.

More kids were being given a score of 32 – just passing – than should have been. We can speculate on the reasons for this, but a few leading candidates are fairly obvious:

  • teachers don’t want pupils who they ‘know’ are generally good with phonics to fail by one mark on a bad day.
  • teachers ‘pre-test’ students and give extra support to those pupils who are just below the threshold – like C/D revision clubs at GCSE.
  • teachers know that the class results may have an impact on them or the school.
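Whatever the mechanism, the statistical fingerprint is the same: scores just below the threshold go missing and reappear at exactly the pass mark. A rough simulation sketches this – the numbers below are invented purely for illustration (the real distributions are in Bishop’s posts):

```python
import random

random.seed(1)

# Illustrative simulation only -- these are NOT the real screening data.
# 'True' scores out of 40, roughly bell-shaped around 30.
true_scores = [min(40, max(0, round(random.gauss(30, 6)))) for _ in range(10000)]

PASS_MARK = 32

def nudge(score):
    """Model the suspected behaviour: some scores just below the
    pass mark get bumped up to exactly the pass mark."""
    if PASS_MARK - 2 <= score < PASS_MARK and random.random() < 0.5:
        return PASS_MARK
    return score

reported = [nudge(s) for s in true_scores]

def counts(scores, lo, hi):
    """Tally how many pupils got each score between lo and hi."""
    return {v: sum(1 for s in scores if s == v) for v in range(lo, hi + 1)}

print("true    ", counts(true_scores, 29, 34))
print("reported", counts(reported, 29, 34))
```

Plot the ‘reported’ counts and you get exactly the picture in the graph above: a dip at 30–31 and a suspicious spike at 32, where a smooth curve should be.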

This last one is the issue I want to focus on. If the class or school results are used in any kind of judgment or comparison, inside or outside the school, then it is only sensible to recognise that human nature should be considered. And the pass rate is important. It might be a factor when it comes time for internal roles. It might be relevant to performance management discussions and/or pay progression. (All 1% of it.)

“The teaching of phonics (letters and the sounds they make) has improved since the last inspection and, as a result, pupils’ achievement in the end of Year 1 phonics screening check has gradually risen.”

From an Ofsted report

Would the inspector in that case have been confident that the teaching of phonics had improved if the scores had not risen?

Assessment vs Accountability

The conclusion here is obvious, I think. Most of the assessment we do in school is intended to be used in two ways: formatively or summatively. We want to know what kids know so we can provide the right support for them to take the next step. And we want to know where that kid is, compared to some external standard or their peers.

Both of those have their place, of course. Effectively, we can think of these as tools for diagnosis. In some cases, literally that; I had a student whose written work varied greatly depending on where he sat. His writing was good, but words were spelt phonetically (or fonetically) if he was sat anywhere other than the first two rows. It turned out he needed glasses for short-sightedness. The phonics screen is or was intended to flag up those students who might need extra support; further testing would then, I assume, suggest the reason for their difficulty and possible routes for improvement.

If the scores are also being used as an accountability measure, then there is pressure on teachers to minimise failure among their students. (This is not just seen in teaching; an example I’m familiar with is ambulance response times, which I first read about in Dilnot and Blastland’s The Tiger That Isn’t, but the issues have continued – see eg this from the Independent.) Ideally, this would mean ensuring a high level of teaching and so high scores. But if a child has an unrecognised problem, it might not matter how well we teach them; they’re still going to struggle. It is only by the results telling us that – and in some cases, telling the parents reluctant to believe it – that we can help them find individual tactics which help.

And so teachers, reacting in a human way, sabotage the diagnosis of their students so as not to risk problems with accountability. Every time a HoD puts on revision classes, every time students were put in for resits because they were below a boundary, every time an ISA graph was handed back to a student with a post-it suggesting a ‘change’, every time their PSA mysteriously changed from an okay 4 to a full-marks 6, we did this. We may also have wanted the best for ‘our’ kids, even if they didn’t believe it! But think back to when league tables changed so BTecs weren’t accepted any more. Did the kids keep doing them or did it all change overnight?

And was that change for the kids?

Any testing which is high-stakes invites participants to try to influence results. It’s worth remembering that GCSE results are not just high-stakes for the students; they make a big difference to us as teachers, too! We are not neutral in this. We sometimes need to remember that.

With thanks to @oldandrewuk, @deevybee and @tom_hartley for the twitter discussion which informed and inspired this post. All arguments are mine, not theirs.

Responding to “Secret Origins”

This post is a duplicate of the comment I’ve just left on a post at Vince Ulam’s blog; it’s here because otherwise the time I spent on formatting and adding hotlinks was wasted.

“These useful idiots, grateful for the imagined recognition and eager to seem important in the eyes of their peers, promote the aims and ideas of their recruiters across social media and via ticketed salons.”

It must be really nice to see yourself as immune to all this, too smart to fall for the conspiracy that everyone else has been duped by. Because, whether you intended it or not, that’s how much of the original post comes across. I think this is what put my back up, to be honest. I’ve attended two ResearchED events, one of which I spoke at. I’d like to think I earned that, rather than being recruited as a useful idiot. But then, in your viewpoint, it’s only natural I’d fall for it: I’m not as clever as you. The contrary argument might be that you’re resentful of not having the opportunity or platform for your views, but I’ve no idea if you’ve applied to present at ResearchED or anything similar. So how about we look at the facts, rather than the inferences and assigned motives you write about?

ResearchED in Context

From a local teachmeet up to national events, the idea of ‘grassroots’ activism in teaching is a powerful one. As bloggers, we both believe that practitioners can influence the ideas and work of others. And yes, I agree that appearing practitioner- or public-led, but actually being influenced by specific political parties or organisations, would be appealing to those organisations. It would lend legitimacy to very specific ideas. You only have to look at the funding of patient organisations by pharmaceutical companies, or VoteLeave and allied groups, to see the issues. But there is surely a sliding scale of influence here.

How we assess the independence of such a grassroots organisation could be done in several ways. Do we look at where the money comes from? Do we examine the people involved in organising or leading it? Do we look at the decisions they make, and how they are aligned with other groups? Do we look at who chooses to be involved, and who is encouraged/dissuaded, subtly or otherwise?

In reality we should do all of those. I think my issue with your post is that you seem to be putting ResearchED in the same category as the New Schools Network among other groups, and (on Twitter) to be adding in the Parents and Teachers for Excellence Campaign too. I see them as very separate cases, and I’m much less hesitant about ResearchED – partly because the focus is teacher practice and engagement, not campaigning. And you raise Teach First, which I have my own concerns about and am leaving to one side now as it’s not relevant.

The New Schools Network is (mostly) funded by government, and many have written about the rather tangled set of circumstances which led to the funding and positions expressed being so closely tied to a policy from one political party. I must admit, I find myself very dubious about anything that Dominic Cummings has had a hand in! Their advocacy and support for free schools, with so far limited evidence that they provide good value for money, frustrates me.

The PTE Campaign is slightly different. I’ve not spent time on searching for funding information but remember from previous news items – this from Schools Week for example – that it lacks transparency, to say the least. I think the name is misleading and their claim to be about moving power away from ‘the elites in Westminster and Whitehall’ to be disingenuous.

And let’s not even start with Policy Exchange.

From where I sit, if you want to group ResearchED with other education organisations, a much better match would seem to be Northern Rocks. The focus is improving and sharing classroom pedagogy, rather than campaigning. They’re both run on a shoestring. Classroom teachers are keen on attending and praise what they get out of the sessions. I can’t find anything on your blog about Northern Rocks, but that could be simple geography. (The bitter part of me suggests it’s not the first time anything happening past Watford gets ignored…)

Back to ResearchED: Funding and Speakers

“We have to hand it to Tom Bennett for his truly amazing accomplishment of keeping his international ‘grassroots’ enterprise going for four years without producing any apparent profits.”

Maybe it’s me seeing something which isn’t there, but your post seems to imply that there must be some big funding secret that explains why ResearchED is still going. What do you think costs so much money? The speakers are volunteers, as are the conference helpers. I don’t know if Tom gets a salary, but considering how much time it must be taking it would seem reasonable for at least a few people to do so. The catering costs, including staffing, are covered by the ticket price. The venues I remember are schools, so that’s not expensive.

As you’ve raised on Twitter during our discussions, the question of transport for UK-based speakers to overseas venues is an interesting one. I know that when I presented at Oxford (the Maths/Science one), my employer covered my travel costs; I assume that was the same for all speakers, or they were self-funding. If you have other specific funding concerns, I’ve not seen you describe them; you can hardly blame me for focusing on this one if you’d rather make suggestive comments than ask proper questions. I would also like to know if speakers can access funding support and if so, how that is decided. I can’t find that information on the website, and I think it should be there. I disagree with lots of what you say – or I wouldn’t have written all this – but that loses legitimacy if I don’t say where we have common ground.

I was surprised to find out how many ResearchED conferences there had been; I was vaguely thinking of seven or eight, which is why I was surprised by your suggestion that David Didau had presented at least six times. I stand corrected, on both counts. Having looked at the site, I’m also surprised that there’s no clear record of all the events in one place. A bigger ask – and one I have addressed to one of the volunteers who I know relatively well – would be for a searchable spreadsheet of speaker info covering all the conferences.

That would be fascinating, wouldn’t it? It would let us see how many repeat speakers there are, and how concentrated the group is. My gut feeling is that most speakers, like me, have presented only once or twice. Researchers would probably have more to say. I’d love to see the gender balance, which subject specialisms are better represented, primary vs secondary numbers, the contrast between state and independent sector teachers, researcher vs teacher ratios…

I’m such a geek sometimes.

You tweeted a suggestion I should ignore my personal experience to focus on the points in your post. The thing is that my personal experience of – admittedly only two – ResearchED conferences is that any political discussion tends to happen over coffee and sandwiches, and there’s relatively little of that. Maybe there’s more at the ‘strategic’ sessions aimed at HTs and policy-makers, rather than the classroom and department methods that interest me. If there’s animosity, it’s more likely to be between practitioners and politicians, rather than along party lines. I suspect I have more in common, to be honest, with a teacher who votes Tory than a left-leaning MP without chalkface experience. It’s my personal experience that contradicts the suggestions in your post about ResearchED being part of a shadowy conspiracy to influence education policy debate.

To return to Ben Goldacre, featured in your post as a victim of the puppet-masters who wanted a good brand to hide their dastardly plans behind: his own words suggest that in the interests of improving the evidence-base of policy, he’s content to work with politicians. Many strong views have been expressed at ResearchED. With such a wide variety of speakers, with different political and pedagogical viewpoints, I’m sure you can find some presentations and quotes that politicians would jump on with glee. And I’m equally sure that there are plenty they ignore, politely or otherwise. But I don’t believe the speakers are pre-screened for a particular message – beyond “looking at evidence in some way is useful for better education.” To be honest, I’m in favour of that – aren’t you? If there’s other bias in speaker selection, it was too subtle for me to notice.

But then, I’m not as clever as you.

Revision Templates, Organised

A perpetual classroom problem is that students translate what we say into what they want to do. How many times have you come back from time off to see that students answered questions 1 and 10, not 1 to 10? Sometimes this is deliberate awkwardness. Sometimes it’s an actual lack of understanding, either of what the task was or why we’re asking them to do it in what seems ‘the hard way’. I’ve long been a fan of the template approach, giving students a framework so they’ve got a place to get started. And I produced a bunch of resources, some of which may be useful for you. I’ve shared these before, here and there, but figured a fresh post was worthwhile. This was mainly prompted by a tweet from a colleague:

So here’s a quick reminder of some printable resources. I’m not going to go through and remove the QR code, but it now goes to a dead link. Feel free to mess around with them as you see fit.

Some of these can be downloaded as Office files, mainly docx and pub (links to a GDrive folder). There may also be jpg versions available for adding to Powerpoints or websites. If there’s no editable version of an example above that you’re after, add a comment here and I’ll dig it up.

If you’ve not already seen it (not sure how, but it’s possible), can I strongly recommend the excellent posters and resources available from the team at @acethattest, AKA The Learning Scientists. On my long and growing jobs list is producing some Physics-specific versions to show how they could be applied within a subject.



Variations on a Theme

It turns out that I’m really bad at following up conference presentations.

Back in early June, I offered a session on teachers engaging – or otherwise – with educational research. It all grew out of an argument I had on Twitter with @adchempages, who has since blocked me after I asked if the AP Chem scores he’s so proud of count as data. He believes, it seems, that you cannot ever collect any data from educational settings, and that he has never improved his classroom practice by using any form of educational research.

But during the discussions I got the chance to think through my arguments more clearly. There are now three related versions of my opinion, quite possibly contradictory, and I wanted to link to all three.

Version the first: Learning From Mistakes, blogged by me in January.

Streamlined version written for the BERA blog: Learning From Experience. I wrote this a while back but it wasn’t published by them until last week.

Presentation version embedded below (and available from if you’re interested).

I’d be interested in any and all comments, as ever. Please let me know if I’ve missed any particular comments from the time – this is the problem with being inefficient. (Or, to be honest, really busy.) The last two slides include all the links in my version of a proper references section.

Thoughts from the presentation

Slide 8: it’s ironic that science teachers, who know all about using models which are useful even though they are by necessity simplified, struggle with the idea that educational research uses large numbers of participants to see overall patterns. No, humans aren’t electrons – but we can still observe general trends using data.

Slide 13: it’s been pointed out to me that several of the organisations mentioned offer cheaper memberships/access. These are, however, mainly institutional memberships (eg £50/yr for the IOP) which raises all kinds of arguments about who pays and why.

Slide 14: a member of the audience argued with this point, saying that even if articles weren’t open-access any author would be happy to share electronic copies with interested teachers. I’m sure he was sincere, and probably right. But as I tried to explain, this assumes that (1) the teacher knows what to ask for, which means they’ll miss all kinds of interesting stuff they never heard about, and that (2) the author is happy to respond to potentially dozens of individual requests. Anyone other than the author or journal hosting or sharing a PDF is technically breaking the rules.

Slide 16: Ironically, the same week as I gave the presentation there was an article in SSR on electricity analogies which barely mentioned the rope model. Which was awkward as it’s one of the best around, explored and endorsed by the IOP among many others.

Slide 20: Building evidence-based approaches into textbooks isn’t a new idea (for example, I went to Andy’s great session on the philosophy behind the Activate KS3 scheme) but several tweeters and colleagues liked the possibility of explicit links being available for interested teachers.

Reflective Observation

I’ve been pretty quiet recently – at least it feels like I’ve not been offering much to the conversation. There are several reasons, but a big part of it is that with paid freelance work I’ve really not been able to justify the time to do things for free. I’m not going to apologize for this because I’m sure you’ll all understand that without this work my family and I can’t go on holiday.
But I’ve missed you all, even if you’ve not been missing me.
This will be a quick post, hopefully to be followed up over the next week with another. I’ve been working in a school a couple of days a week, mixing teacher coaching with some intervention classes. It’s been interesting – and enjoyable, at least after the kids stopped swearing at me – so I thought it might be worth sharing a few things I’ve done.
I’m currently reading Mentoring Mathematics Teachers, effectively a collection of research papers published as a book. Now, I don’t teach maths – except in the process of getting the physics right – but I’ve found it really interesting. It’s mainly aimed at in-school mentors for pre-service teachers (PGCE, School Direct or similar) and NQTs. I’ve got a strong interest in how we can support teachers for a longer period than just a year, and in my day job we mentor ‘Early Career Teachers’ to the end of their second year post-qualification. I’m working through about a chapter a week, making notes in the margins, and really need to blog some of the ideas. So it was perfect timing to come to Chapter 9 by Lofthouse and Wright, about encouraging reflection by using a pro forma for observations. I’ve adapted it slightly with a fair bit of success and wish I’d been using it for longer.
As a physics teacher, I feel I should now make the point that teaching is a quantum process which is changed simply by the act of being observed. If you laughed at that, congratulations and please pick up your Physics Education Geek badge on the way out.
observation pro forma
Click for PDF version

There are four stages:

  1. The ‘observee’ defines one or two aspects they want to focus on, choosing a couple of questions for the observer to bear in mind.
  2. The observer makes notes of specific features in the lesson relating to these questions – no judgment, just facts.
  3. The observer poses questions based on these features to prompt reflection and discussion.
  4. Together, the colleagues plan future actions based on the outcome of these prompts, leading to questions for the next observed lesson.
The aim of this structure is to encourage reflective practice rather than “I saw X and you should try Y instead.” In this way both teachers gain from it as there isn’t necessarily a hierarchy in place. It would work just as well when an experienced teacher is observed by a novice, with the questions directing them towards interesting features of the lesson. I can also see it being useful for peer observation – and like all such activities, it would work best when well-separated from any kind of performance management process.
I should emphasize that this is my take on the process rather than a paraphrased version of the original. And, of course, I’m still tweaking it! Currently I’m following up soon after the lesson but wonder if leaving the sheet with the observed teacher so they can think about the prompts more deeply might be worthwhile. I’m numbering the pieces of evidence I see and then grouping them in the ‘Reflection Prompts’ section where appropriate – this helps me gather my thoughts and gives more than one relevant example.
EDIT: I recommend reading a great post by @bennewmark, Finding a Voice, for the issues that can arise when an observee tries to replan a lesson based on well-meaning comments from a colleague.
Please help yourself to the printable version, try it out and let me know what you think. Maybe everyone else has something better already – it’s two years since I had a lesson observed! But I’d appreciate, as ever, any feedback or suggestions.

Learning from Mistakes

“So I was arguing on Twitter…”

That’s how all the best blog posts start, just like the best fairy tales start with “Once upon a time…” In this case, it wasn’t a new argument – in fact it was a disagreement I’ve had before, with the same person. But it’s also something which has been discussed in staffrooms all over the country, probably all over the world. A version of it has been had any time two people with the same job compare notes.

How can we be the best professionals possible without making all the mistakes personally?

It’s true that people learn from mistakes. Sometimes. When we recognize them. When we can change our behaviour based on that insight. When we’re not too hungry, angry, lonely or tired. When we have the chance to reflect on our actions and plan for ‘the next time’. When we can successfully generalize our specific experience.

I was having this conversation, for the hundredth time, with my eldest this week. In particular, we were talking about how the only thing better than learning from your own mistakes is to learn from somebody else’s. It’s generally less painful, expensive and embarrassing. We talked about how, perhaps, it’s the pain of our own mistakes which means they stick better.

Teaching from Mistakes

In education, we learn a lot from screwing up ourselves. From not labelling the beakers, to letting year 7 use powerpacks with 1A bulbs, to mixing up the two Rebeccas in your class during parents’ evening. We also, especially early in our career, learn a lot from watching our colleagues, deliberately or in passing.

(Brief digression: we should do more of this. Short observations, team-teaching, co-planning, watching a practical, seeing how they manage a demonstration, the ‘spiel’ for radioactive samples… all great chances to learn from a colleague and give them the ‘view from the back’. Go into an A-level English Lit lesson and talk for ten minutes about the ‘science’ of Frankenstein’s Creature, or invite a music teacher colleague into your Sound lesson to demonstrate high and low pitch. The important thing is to make a solemn promise that this will never show up on performance management.)

The argument I had seemed to come down to one principle. I think that we as teachers can – and should – learn from the successes and mistakes of other teachers as summed up in research. My counterpart feels that if someone isn’t a good teacher, they never will be, and that there’s nothing he can learn about teaching outside of a classroom. He sees educational research as a waste of his time.

But there’s a lot of research out there, which means a lot of student experiences added up to suggestions. Test results that might make patterns, implying how one approach on average works better than another. Don’t get me wrong – there’s a lot of crap, too. There are a lot of context-free claims, a lot of ‘studies’ carried out without a control group, action research subject to the Hawthorne Effect and so on. But the argument I had – in this case and before – wasn’t about the bad ‘research’ that’s out there. It was about the very idea that educational research should or could guide our practice at all. And to me, that just seems weird.


During the conversation, @adchempages also used #peoplearenotelectrons. Which is true. But isn’t the whole point of science to use models, simpler than reality, to give us an indication of how reality works? We can model people as particles making up a fluid when we design corridors and stairwells. And that gives us useful information. Nobody suggests that those people travelling on the Underground are actually faceless, indistinguishable drones. (I’m saving the sarcastic comment as it would undermine my point.) But with enough data, and enough people, we can make good predictions about what will usually happen most of the time. There are caveats:

  • Averages using large numbers aren’t specific to a small subset, even if it’s homogeneous
  • There are lots of confounding variables, some of which are unknown
  • Kids are all different and there’s a fine line between describing and defining them
  • Many anecdotes are not the same as data
  • We tend to find/remember the results which confirm our expectations


I feel like I’ve been here before. In fact, I have – I wrote a similar post back in 2013 about how I might design a trial, and there’s also my post from when the Evidence-Based Bandwagon was taking off. But it’s worth revisiting as long as we are critical about research. We need to be able to ask good questions about the sample sizes, about the methodology, about sources of potential bias. But then we need to take on board the advice and try applying it to our own classes. Let’s imagine a way to test someone’s willingness to use research in their own practice.


  1. Recruit lots of teachers, teaching the same subject to the same age group.
  2. Match ‘equivalent classes’ or, ideally, randomize.
  3. Choose two interventions (or simply the same activities in a different order, eg theory then practical or the reverse).
  4. Compare the results of the kids on the same test.
A difference between the two averages might be significant (suggesting a real difference) or not (could be due to random chance). The bigger the numbers, the more we should pay attention to that difference. There are lots of statistical tests we could argue about, but for now let’s assume the difference is dramatic enough to convince us that one intervention is better than the other for students learning this concept. Why would you ignore that hint when planning your own lessons?
Any two classes might be compared without spotting this pattern. Only wider research lets us see what’s going on. The difference might be so small that we decide it doesn’t matter. It might turn out that one intervention works better for girls, the other for boys (which then leads to a hugely political issue, doesn’t it?!). But if we don’t ask, then we’ll never know.
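For anyone curious what ‘might be significant’ means in practice, here’s a minimal sketch with entirely made-up scores for two classes of twelve. A permutation test is one simple way to ask how often a difference this big would turn up if the intervention made no difference at all:

```python
import random

random.seed(42)

# Hypothetical test scores (out of 50) for two matched classes:
# one taught 'theory first', the other 'practical first'.
group_a = [32, 35, 29, 41, 38, 30, 33, 36, 40, 28, 34, 37]
group_b = [36, 39, 35, 44, 41, 33, 38, 40, 43, 31, 37, 42]

observed = sum(group_b) / len(group_b) - sum(group_a) / len(group_a)

# Permutation test: if the intervention made no difference, the group
# labels are arbitrary, so shuffle them and count how often a difference
# at least this large appears purely by chance.
pooled = group_a + group_b
n = len(group_a)
trials = 10000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[n:]) / n - sum(pooled[:n]) / n
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"mean difference: {observed:.2f}, p = {p_value:.3f}")
```

A small p here suggests the gap is unlikely to be chance alone – though with classes of twelve it takes a big gap to show up, which is exactly why pooling results across many classes matters.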
When we look at research, we need to remember that our class might be so different that it doesn’t apply. But if so we need to base that on data, not just ‘because I said so.’ I’m not saying instinct should be ignored, but let’s try informed judgment. Research won’t often give a recipe. It won’t turn us into robots or allow our jobs to be done by computer. What it can do is inform and guide. It can suggest good starting points, or approaches that, more often than not, will be the best way to teach a concept.
We could ‘teach’ science by giving the equation, a load of examples and walking away. But we don’t. Because the data shows that it doesn’t work as well for most students as considering possible links between variables, investigating patterns, explicitly eliminating confounding factors, describing a proportional relationship between cause and effect and then putting this into mathematical terms with fixed values.
In my day job with the IOP, one of the ideas that is really useful at KS3 and KS4 for teaching circuits is the rope model. It’s not new, and it’s not something we invented from nothing. It’s based on research, including ideas summarized in the classic Making Sense of Secondary Science, showing that previous models caused misconceptions about current. It avoids what I call the ‘electron delivery’ trap in models such as pizza delivery trucks, allowing for clearer explanations of AC later on, as well as being a ‘hands-on’ rather than imagined model.
It’s interesting that @adchempages chooses to describe teaching as an art, rather than a science. I can see what he means, in a way. But I’d suggest that there’s a middle-ground. Is it better to think of teaching as a craft? It might be ‘in person’ rather than strictly ‘hands-on’, but that word hints more at the professional judgment and individual style involved than the common perception of a science. Crafts traditionally guarded their secrets from outsiders but shared them openly within the group or guild. The second part, at least, is a model we should aspire to. Let’s think of research as just a conversation within a larger staffroom, and maybe we can avoid making all the mistakes ourselves.

Book Swap


Six weeks of summer holiday stretching ahead and I’ve laid in a stockpile of books, both paper and electronic, to keep me out of trouble. I’ve also got a long list of saved articles to catch up on; lesson study is something I want to look into much more closely, for example.

Every term or so I’ve been buying a book that’s relevant to my teaching. These alternate, roughly, between popular science and education. I want to be a better teacher and engaging with a good book can’t hurt. I’ve always liked paper copies, because it’s easier to scribble in the margins. (I am looking at ways to annotate ebooks and then share/search main points, but that’s another post.) But this means that I’ve got overflowing bookshelves.

Could you help?

I’d like to start some book swapping. Choose one of the books by adding a comment, let me know your address by email and I’ll post it your way. It doesn’t count as CPD unless you think about it, so when you’re done type something about the book. Good points and bad, ideas you liked or how you’ve put it into practice. I’ll host that as a guest piece and/or link to your own site.

Maybe you’ve got books you’d like to offer as loans to fellow teachers? (If you don’t already do something like this in your own school, can I suggest you set it up first to save postage costs?) If so, include a list of titles/authors, maybe with a few words about who might get the most out of reading, in the comments. It should be really easy for us all to get a couple of new teaching books to inspire us over the next few months, for a few stamps instead of the often high purchase cost. And then the discussion will help us develop the ideas further.

Worth a try? You know what to do.