Responding to “Secret Origins”

This post is a duplicate of the comment I’ve just left on a post at Vince Ulam’s blog; it’s here because otherwise the time I spent on formatting and adding hotlinks was wasted.

“These useful idiots, grateful for the imagined recognition and eager to seem important in the eyes of their peers, promote the aims and ideas of their recruiters across social media and via ticketed salons.”

It must be really nice to see yourself as immune to all this, too smart to fall for the conspiracy that everyone else has been duped by. Because, whether you intended it or not, that’s how much of the original post comes across. I think this is what put my back up, to be honest. I’ve attended two ResearchED events, one of which I spoke at. I’d like to think I earned that, rather than being recruited as a useful idiot. But then, from your viewpoint, it’s only natural I’d fall for it: I’m not as clever as you. The contrary argument might be that you’re resentful of not having the opportunity or platform for your views, but I’ve no idea if you’ve applied to present at ResearchED or anything similar. So how about we look at the facts, rather than the inferences and assigned motives you write about?

ResearchED in Context

From a local teachmeet up to national events, the idea of ‘grassroots’ activism in teaching is a powerful one. As bloggers, we both believe that practitioners can influence the ideas and work of others. And yes, I agree that appearing practitioner- or public-led, but actually being influenced by specific political parties or organisations, would be appealing to those organisations. It would lend legitimacy to very specific ideas. You only have to look at the funding of patient organisations by pharmaceutical companies, or VoteLeave and allied groups, to see the issues. But there is surely a sliding scale of influence here.

How we assess the independence of such a grassroots organisation could be done in several ways. Do we look at where the money comes from? Do we examine the people involved in organising or leading it? Do we look at the decisions they make, and how they are aligned with other groups? Do we look at who chooses to be involved, and who is encouraged/dissuaded, subtly or otherwise?

In reality we should do all of those. I think my issue with your post is that you seem to be putting ResearchED in the same category as the New Schools Network among other groups, and (on Twitter) to be adding in the Parents and Teachers for Excellence Campaign too. I see them as very separate cases, and I’m much less hesitant about ResearchED – partly because the focus is teacher practice and engagement, not campaigning. And you raise Teach First, which I have my own concerns about and am leaving to one side for now as it’s not relevant.

The New Schools Network is (mostly) funded by government, and many have written about the rather tangled set of circumstances which led to the funding and positions expressed being so closely tied to a policy from one political party. I must admit, I find myself very dubious about anything that Dominic Cummings has had a hand in! Their advocacy and support for free schools, with so far limited evidence that they provide good value for money, frustrates me.

The PTE Campaign is slightly different. I’ve not spent time searching for funding information but remember from previous news items – this from Schools Week for example – that it lacks transparency, to say the least. I think the name is misleading, and their claim to be about moving power away from ‘the elites in Westminster and Whitehall’ disingenuous.

And let’s not even start with Policy Exchange.

From where I sit, if you want to group ResearchED with other education organisations, a much better match would seem to be Northern Rocks. The focus is improving and sharing classroom pedagogy, rather than campaigning. They’re both run on a shoestring. Classroom teachers are keen on attending and praise what they get out of the sessions. I can’t find anything on your blog about Northern Rocks, but that could be simple geography. (The bitter part of me suggests it’s not the first time anything happening past Watford gets ignored…)

Back to ResearchED: Funding and Speakers

“We have to hand it to Tom Bennett for his truly amazing accomplishment of keeping his international ‘grassroots’ enterprise going for four years without producing any apparent profits.”

Maybe it’s me seeing something which isn’t there, but your post seems to imply that there must be some big funding secret that explains why ResearchED is still going. What do you think costs so much money? The speakers are volunteers, as are the conference helpers. I don’t know if Tom gets a salary, but considering how much time it must be taking it would seem reasonable for at least a few people to do so. The catering costs, including staffing, are covered by the ticket price. The venues I remember are schools, so that’s not expensive.

As you’ve raised on Twitter during our discussions, the question of transport for UK-based speakers to overseas venues is an interesting one. I know that when I presented at Oxford (the Maths/Science one), my employer covered my travel costs; I assume that was the same for all speakers, or they were self-funding. If you have other specific funding concerns, I’ve not seen you describe them; you can hardly blame me for focusing on this one if you’d rather make suggestive comments than ask proper questions. I would also like to know if speakers can access funding support and if so, how that is decided. I can’t find that information on the website, and I think it should be there. I disagree with lots of what you say – or I wouldn’t have written all this – but that loses legitimacy if I don’t say where we have common ground.

I was surprised to find out how many ResearchED conferences there had been; I was vaguely thinking of seven or eight, which is why I was surprised by your suggestion that David Didau had presented at least six times. I stand corrected, on both counts. Having looked at the site, I’m also surprised that there’s no clear record of all the events in one place. A bigger ask – and one I have addressed to one of the volunteers who I know relatively well – would be for a searchable spreadsheet of speaker info covering all the conferences.

That would be fascinating, wouldn’t it? It would let us see how many repeat speakers there are, and how concentrated the group is. My gut feeling is that most speakers, like me, have presented only once or twice. Researchers would probably have more to say. I’d love to see the gender balance, which subject specialisms are better represented, primary vs secondary numbers, the contrast between state and independent sector teachers, researcher vs teacher ratios…

I’m such a geek sometimes.

You tweeted a suggestion I should ignore my personal experience to focus on the points in your post. The thing is that my personal experience of – admittedly only two – ResearchED conferences is that any political discussion tends to happen over coffee and sandwiches, and there’s relatively little of that. Maybe there’s more at the ‘strategic’ sessions aimed at HTs and policy-makers, rather than the classroom and department methods that interest me. If there’s animosity, it’s more likely to be between practitioners and politicians, rather than along party lines. I suspect I have more in common, to be honest, with a teacher who votes Tory than a left-leaning MP without chalkface experience. It’s my personal experience that contradicts the suggestions in your post about ResearchED being part of a shadowy conspiracy to influence education policy debate.

To return to Ben Goldacre, featured in your post as a victim of the puppet-masters who wanted a good brand to hide their dastardly plans behind: his own words suggest that in the interests of improving the evidence-base of policy, he’s content to work with politicians. Many strong views have been expressed at ResearchED. With such a wide variety of speakers, with different political and pedagogical viewpoints, I’m sure you can find some presentations and quotes that politicians would jump on with glee. And I’m equally sure that there are plenty they ignore, politely or otherwise. But I don’t believe the speakers are pre-screened for a particular message – beyond “looking at evidence in some way is useful for better education.” To be honest, I’m in favour of that – aren’t you? If there’s other bias in speaker selection, it was too subtle for me to notice.

But then, I’m not as clever as you.


Dominic Cummings: Ghost Protocol

I saw Dominic Cummings at Northern Rocks. He was clearly impassioned, but I remain unconvinced by his solutions even though we recognise many of the same problems. Let’s think about how he got to where he is.

He was a Special Adviser to a range of government ministers, most recently Michael Gove. ‘SpAds’ are expected/allowed to be political but work within the civil service. They must follow a code of conduct, and their minister is responsible for their actions:

The responsibility for the management and conduct of special advisers, including discipline, rests with the Minister who made the appointment

(from the 2010 code linked above)

During his time at the DfE, there were many controversies about attacks on those within education who disagreed with the official line. I’m sure he was not responsible for all of them; equally, I’m sure he was instrumental in at least some. Gove himself has not been above personal attacks. The use of the @toryeducation Twitter account is, officially at least, still a mystery – although many feel Cummings and a fellow SpAd, De Zoete, were contributors. There were, I’m sure, many reasons he chose to resign last year. Since then he’s been fairly busy, and softly spoken as ever.

Special advisers must not take public part in political controversy whether in speeches or letters to the Press, or in books, articles or leaflets; must observe discretion and express comment with moderation, avoiding personal attacks; and would not normally speak in public for their Minister or the Department.

paragraph 12 from the Code of Conduct


Now, we have Netflix at home. (Last night I watched Tron: Legacy. Don’t judge me.) But more relevantly, a while back I watched Mission Impossible: Ghost Protocol. (What’s with all the colons?) In this, the Impossible Mission Force are ‘disavowed’, which means they’re officially not supported by the government, so can hopefully get away with stuff but avoid political repercussions.

Which made me think.

I offer the following satire for your amusement.

In a shadowy office at the DfE, late 2013.

MG: Dom, you know I agree with what you’re saying but you’re really not supposed to say all this stuff.

DC: &%$@{ing @$£ *&^@X& and their code of conduct.

MG: Yes, but I’m responsible for what you say and that means more hassle than it’s worth.

DC: Why does it have to be about blame?

MG: They keep banging on about accountability when everyone knows that public servants like teachers are accountable to us, not the other way around.

DC: But I’ve got all these really important ideas and loads of people disagree and they keep using facts to contradict me and then I get all mad and slag them off.

MG: Funnily enough I’d noticed that, but because you’re not an ex-journalist with the Times, the papers aren’t as nice to you, so I get the flak.

DC: So the problem is that you’re accountable because I’m a SpAd?

MG: Exactly.

DC: But if I wasn’t a SpAd, then we wouldn’t be able to swap ideas over email. I couldn’t meet with you at the Department.

MG: Well, we wouldn’t be able to use the official email accounts, because they can be requested under freedom of information. But there are ways around that. And as for meetings, that would only be a problem if we actually kept records of who visited.

DC: So I could make all the claims I wanted, say whatever I liked (or you suggested), talk you up and slag everyone else off… but because I wasn’t a SpAd, there would be nothing Cameron or anyone else could do about it?

MG: Hmm.

Pause.


Divided and Conquered?

So I was on Twitter.

@TeacherROAR – who I follow – retweeted an item from @NUTSouthWest – who I don’t – which in turn quoted figures from an article in the Independent.

I followed the conversation and was struck by this tweet to another tweeting teacher.

followed by:

I responded in turn and a not particularly pleasant slanging match ensued. I had two main issues, one about Twitter and the other about teacher solidarity. Maybe I didn’t express myself well in 140 characters – but more on this limitation in a moment. EDIT: And this is without even considering the actual figures involved, of which more is added at the end.

Firstly, I don’t think anyone assumes that a retweet means total support of the original message. In fact, sometimes it’s intended as mockery! But if you quote figures, and someone asks you about them, it’s reasonable to justify or explain. I think. If it turns out they’re wrong, I’d see it as only fair to tweet a follow-up. Accountability, yes? Online we only have our reputation as currency. Challenging figures or opinions isn’t the same thing as an attempt to censor opinion, and for what it’s worth, I agree that if we only have exaggerated figures to use as propaganda we’ve got no chance. As I tweeted to @sidchip64, a ‘roar’ without anything to back it up is just bluster.

Secondly, I can just imagine Gove or his minions rubbing their hands together and laughing, watching those who teach fighting with each other instead of him. Dismissing a challenge from another teacher is rude. I expect my students to question what I say – often I demand it. But I expect better of any professional who works in a classroom. Solidarity means we work together to get it right, and that includes good statistics. It doesn’t mean we unquestioningly back a colleague who’s wrong.

Maybe it’s about a limited medium. I often find this on Twitter – great for tips, bad for clear ideas. Soundbites, not critical debate. So I suggested to @TeacherROAR that it wouldn’t be hard to clarify what they meant – and justify it – in a blog. For some reason this was seen as a demand and so I decided to do it myself. Half an hour later, here we are. I feel better for it, anyway.

So what I didn’t include last night – and, believe it or not, woke up thinking about at half-five this morning – is a point of view on the numbers. They got attention, obviously. That was the point. But I think it was poor of the Independent to quote from a report by the Sixth Form Colleges Association – a report I haven’t yet found, but that may be due to lack of caffeine – which makes a direct comparison between the annual funding for their students and that spent on setting up free schools this year.

Now, it would be fair to say that I’m very dubious about free schools, in particular the application and set-up process. Laura McInerney explains these concerns much more eloquently and expertly than I could. But that doesn’t mean we should misuse data in this way. Making last year’s nine free schools (some or all of the total?) and their current 1557 students liable for the entire cost of setting them up – when the assumption is that these costs would actually be spread over the foreseeable life of the schools – is wrong. If I can be forgiven a physics example, it’s like working out the kWh cost of electricity from a nuclear power station using all the commissioning and decommissioning costs but only a single year of electrical output.

Picking numbers out of the air, if each of those nine free schools costs £3m to run this year (which would make the set-up costs £35m) then the cost per student comes to a little over £17,000. If their running costs are £2m annually, then the figure is £11,500 or so. Now, these figures are still too high – but they’re more realistic, unless each of those schools is to shut down after a single year of being open.
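To make my working transparent, here’s a quick back-of-the-envelope sketch in Python. The running costs are the illustrative figures above, and the 25-year operating life is my own assumption for comparison, not a number from the report:

```python
# Back-of-the-envelope cost per student, using illustrative figures only.
STUDENTS = 1557           # current students across the nine free schools
SCHOOLS = 9
SETUP_TOTAL = 35_000_000  # assumed one-off set-up costs
LIFETIME_YEARS = 25       # hypothetical operating life over which to spread set-up

def cost_per_student(running_per_school, setup=0.0, spread_over_years=1):
    """Annual cost per student: running costs plus a share of set-up costs."""
    total = running_per_school * SCHOOLS + setup / spread_over_years
    return total / STUDENTS

print(round(cost_per_student(3_000_000)))                     # ~17,341: running costs only
print(round(cost_per_student(2_000_000)))                     # ~11,561: lower running costs
print(round(cost_per_student(3_000_000, SETUP_TOTAL)))        # ~39,820: whole set-up cost loaded onto one year
print(round(cost_per_student(3_000_000, SETUP_TOTAL, LIFETIME_YEARS)))  # ~18,240: set-up spread over 25 years
```

The point isn’t the precise numbers – they’re invented – but how much the answer swings depending on whether the one-off costs are dumped on a single year or spread over the life of the schools.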

Yes, I agree that free schools haven’t always been set up where they’re actually needed, so you could argue the costs are wasted. Yes, I know that this year a lot has been spent, potentially to the detriment of sixth form colleges. But I’d be prepared to bet that back when the colleges were set up, some people claimed they were a waste of money. And I’m sure they were justified by looking at the benefits over time, not just costs in the first year. If we want to be taken seriously – and this goes back to my first point – then we must justify the numbers we use, or we are building our argument on very weak foundations.

A final quote, this time from much longer ago.

If we do not hang together, we shall surely hang separately.

Benjamin Franklin

Ofqual’s Absolute Error

In science lessons we teach students about the two main categories of error when taking readings. (And yes, I know that it’s a little more complicated than that.) We teach about random and systematic error.

Random errors are the ones due to inherently changing and unpredictable variables. They give readings which may be above or below the so-called ‘true value’. We can make allowances for them by repeating the reading, keeping all control variables the same, then finding a mean value. The larger the range, the bigger the potential random error – the size of this range is what is now described as the precision of the reading. I sometimes have my students plot this range as an error bar.

A systematic error is an artifact of the measuring system. It will be consistent, in direction and size (perhaps in proportion to the reading, rather than absolute). A common type is a ‘zero error’, where the measuring device does not start at zero so all readings are offset from the true value. We sometimes calibrate our readings to account for this.

You can consider spelling errors due to sloppy typing as being random, while persistently misspelling a particular word is systematic.
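If a concrete example helps, here’s a minimal simulation in Python (all numbers made up) showing why repeating readings and averaging deals with random error, but leaves a zero error untouched until we calibrate:

```python
import random

TRUE_VALUE = 20.0     # the 'true value' we are trying to measure
ZERO_ERROR = 0.3      # systematic offset: the instrument doesn't start at zero
RANDOM_SPREAD = 0.5   # scale of the unpredictable reading-to-reading variation

def take_reading():
    """One reading: true value, plus a consistent offset, plus random scatter."""
    return TRUE_VALUE + ZERO_ERROR + random.uniform(-RANDOM_SPREAD, RANDOM_SPREAD)

readings = [take_reading() for _ in range(10)]
mean = sum(readings) / len(readings)
reading_range = max(readings) - min(readings)   # the spread we might plot as an error bar

print(f"mean reading = {mean:.2f}")              # averaging shrinks the random error...
print(f"range        = {reading_range:.2f}")
print(f"offset left  = {mean - TRUE_VALUE:.2f}") # ...but the zero error remains
print(f"calibrated   = {mean - ZERO_ERROR:.2f}") # calibration removes the known offset
```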

So what does this have to do with Ofqual?

The recent issues with the scoring of GCSE English coursework – discussed on Twitter with the hashtag #gcsefiasco – are a good example of errors causing problems. But if we use the scientific approach to errors, it is much harder to blame teachers, as Stacey has done.

Coursework is marked by teachers according to a markscheme, provided by the exam board. (It’s worth remembering that apart from multiple-choice papers, all external exams are marked in this way too.) An issue with controlled assessments is that teachers are unavoidably familiar with the marking guidelines, so can ensure students gain skills that should help them demonstrate their knowledge. This is, after all, the point of the classroom: to learn how it’s done. To complain that we ‘teach to the test’ is like criticising driving instructors for teaching teenagers how to drive on British roads.

Once the work of all students in a cohort has been marked, the department will spend some time on ‘internal moderation’. This means checking a random sample, making sure everyone has marked in the same way, and to the standard specified by the markscheme. Once the school has committed to the accuracy of the marks, they are sent to the exam board, who will specify a new random sample to be remarked externally. If the new scores match those awarded by the school, within a narrow tolerance, then all the scores are accepted. If not, then all will be adjusted, up or down, to correct for a systematic error by the department. There will still be a few random errors – deviations from the ‘correct’ score on specific essays – but these will be fairly rare.
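Very roughly, and with the tolerance and adjustment rule invented purely for illustration (the real ones belong to the exam boards), the moderation logic looks something like this:

```python
def moderate(school_marks, remark_pairs, tolerance=2):
    """Toy external moderation: compare the board's re-marks on a sample with the
    school's marks for the same scripts. If the average difference is within
    tolerance, accept all the school's marks; otherwise shift every mark to
    correct the systematic error. (Real tolerances and rules are the boards' own.)"""
    diffs = [board - school for school, board in remark_pairs]
    mean_diff = sum(diffs) / len(diffs)
    if abs(mean_diff) <= tolerance:
        return school_marks                              # only random scatter: accept as submitted
    return [mark + round(mean_diff) for mark in school_marks]  # systematic error: adjust everyone

# Example: the board marks the sampled essays 3 lower on average, so every mark drops by 3.
print(moderate([40, 35, 28, 44], [(40, 37), (35, 32), (28, 25)]))  # [37, 32, 25, 41]
```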

The exam board then converts the coursework score, using a top-secret table, into a percentage of the available marks. You may not need to get everything perfect to get an ‘effective’ 100% on the coursework element of the course. And dropping 2 of 50 on the raw score, as marked by the teachers, may mean more than a 4% decrease after conversion. This table will be different for different papers because some exams are harder than others, but changes should be minimal if we want to be able to compare successive years.
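The real tables are the boards’ own and only appear after the event, but an invented one shows how the arithmetic can work:

```python
# A purely invented raw-score-to-percentage table for a 50-mark task.
conversion = {50: 100, 49: 100, 48: 95, 47: 91, 46: 87}  # hypothetical values only

print(conversion[49])                   # 100: full 'effective' credit without a perfect raw score
print(conversion[50] - conversion[48])  # 5: dropping 2 of 50 raw marks costs more than 4% here
```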

So what happened last summer?

Students who had gained the same raw score on the same coursework task, which had been marked to the same standard as confirmed by the exam boards during external moderation, were awarded different percentages by the exam boards depending on when the work was sent in. This was after sustained pressure from Ofqual, possibly because using the same boundaries in June as they had in January would have resulted in ‘too many’ higher grades. This was not about a small number of random errors in marking. This was not about a systematic error by some or all schools, because the boards had procedures to identify that. This was about a failure by the exam boards and Ofqual to discreetly fix the results the way they intended to.

It is a basic principle in science that you cannot adjust your results based on what you want or expect them to be. You might be surprised, you might recheck your working, but you can’t change the numbers because of wishful thinking. If there was an error, it was by the exam boards and Ofqual, who showed that they could not specify what work was equivalent to a C grade.

The procedures were followed in schools. The exam boards agreed that the controlled assessments were marked to their own standards. And yet Ofqual still claim that it is the fault of us teachers, who prepared our students so well for the controlled assessment that we are being called cheats.

I’ve blogged before about the weaknesses built into the science ISAs. The exam board and Ofqual are either too busy to read what one teacher has to say – perfectly reasonable – or don’t have an answer. I don’t understand how it is our fault when their system approved what teachers did and how they marked.

So maybe we shouldn’t be marking controlled assessments at all.

PS (This is the cue for the unions to step in. And they won’t. This is why we need one national professional body representing teachers, using evidence rather than political rhetoric.)