Measurable Outcomes

Following a conversation on twitter about the phonics screening test administered in primary school, I have a few thoughts about how it’s relevant to secondary science. First, a little context – especially for colleagues who have only the vaguest idea of what I’m talking about. I should point out that all I know about synthetic phonics comes from glancing at materials online and helping my own kids with reading.

Synthetic Phonics and the Screening Check

This is an approach to teaching reading which relies on breaking words down into parts. These parts and how they are pronounced follow rules; admittedly, English is probably less regular than many other languages! But the rules are useful enough to be a good stepping stone. So far, so good – that’s true of so many models I’m familiar with from the secondary science classroom.

The phonics screen is intended, on the face of it, to check if individual students are able to correctly follow these rules with a sequence of words. To ensure they are relying on the process, not their recall of familiar words, nonsense words are included. There are arguments that some students may try to ‘correct’ those to approximate something they recognise – the same way as I automatically read ‘int eh’ as ‘in the’ because I know it’s one of my characteristic typing mistakes. I’m staying away from those discussions – out of my area of competence! I’m more interested in the results.

Unusual Results

We’d expect most attributes to follow a predictable pattern over a population. Think about height in humans, or hair colour. There are many possibilities but some are more common than others. If the distribution isn’t smooth – and I’m sure there are many more scientific ways to describe it, but I’m using student language because of familiarity – then any thresholds are interesting by definition. They tell us that something interesting is happening here.

The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” but “That’s funny …”

Possibly Isaac Asimov. Or possibly not.

It turns out that with the phonics screen, there is indeed a threshold. And that threshold just so happens to be at the nominal ‘pass mark’. Funny coincidence, huh?

The esteemed Dorothy Bishop, better known to me and many others as @deevybee, has written about this several times. A very useful post from 2012 sums up the issue. I recommend you read that properly – and the follow-up in 2013, which showed the issue continued to be of concern – but I’ve summarised my own opinion below.

[Figure: distribution of phonics check scores, 2013 – D. Bishop, used with permission.]
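
If you want a feel for what that kind of ‘bump’ looks like without the real data, here’s a toy sketch – entirely invented numbers, nothing to do with the actual screening results: generate a roughly smooth spread of scores out of 40, nudge some of the just-below-32 scores up to the pass mark, and compare the counts either side of the threshold.

```python
# Toy illustration only: invented numbers, not the real screening data.
import random

random.seed(1)
PASS_MARK = 32

# A smooth-ish distribution of scores out of 40.
scores = [min(40, max(0, round(random.gauss(30, 6)))) for _ in range(10_000)]

# Nudge 60% of the scores sitting just below the threshold up to the pass mark.
nudged = [PASS_MARK if (PASS_MARK - 3 <= s < PASS_MARK and random.random() < 0.6) else s
          for s in scores]

for label, data in (("smooth", scores), ("nudged", nudged)):
    counts = {s: data.count(s) for s in range(28, 36)}
    print(label, counts)
```

In the ‘nudged’ version there’s a dip just below 32 and a pile-up at 32 – exactly the shape that makes a threshold interesting.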

More kids were being given a score of 32 – just passing – than should have been. We can speculate on the reasons for this, but a few leading candidates are fairly obvious:

  • teachers don’t want pupils who they ‘know’ are generally good with phonics to fail by one mark on a bad day.
  • teachers ‘pre-test’ students and give extra support to those pupils who are just below the threshold – like C/D revision clubs at GCSE.
  • teachers know that the class results may have an impact on them or the school.

This last one is the issue I want to focus on. If the class or school results are used in any kind of judgment or comparison, inside or outside the school, then it is only sensible to take human nature into account. And the pass rate is important. It might be a factor when it comes to internal roles. It might be relevant to performance management discussions and/or pay progression. (All 1% of it.)

“The teaching of phonics (letters and the sounds they make) has improved since the last inspection and, as a result, pupils’ achievement in the end of Year 1 phonics screening check has gradually risen.”

From an Ofsted report

Would the inspector in that case have been confident that the teaching of phonics had improved if the scores had not risen?

Assessment vs Accountability

The conclusion here is obvious, I think. Most of the assessment we do in school is intended to be used in two ways: formatively or summatively. We want to know what kids know so we can provide the right support for them to take the next step. And we want to know where that kid is, compared to some external standard or their peers.

Both of those have their place, of course. Effectively, we can think of these as tools for diagnosis. In some cases, literally that; I had a student whose written work varied greatly depending on where he sat. His writing was good, but words were spelt phonetically (or fonetically) if he was sat anywhere other than the first two rows. It turned out he needed glasses for short-sightedness. The phonics screen is, or was, intended to flag up those students who might need extra support; further testing would then, I assume, identify the reason for their difficulty and possible routes for improvement.

If the scores are also being used as an accountability measure, then there is pressure on teachers to minimise failure among their students. (This is not just seen in teaching; an example I’m familiar with is ambulance response times, which I first read about in Dilnot and Blastland’s The Tiger That Isn’t, but the issues have continued – e.g. this piece from the Independent.) Ideally, this would mean ensuring a high level of teaching and so high scores. But if a child has an unrecognised problem, it might not matter how well we teach them; they’re still going to struggle. It is only by the results telling us that – and in some cases, telling the parents reluctant to believe it – that we can help them find individual tactics that work.

And so teachers, reacting in a human way, sabotage the diagnosis of their students so as not to risk problems with accountability. Every time a HoD put on revision classes, every time students were put in for resits because they were below a boundary, every time an ISA graph was handed back to a student with a post-it suggesting a ‘change’, every time their PSA mysteriously changed from an okay 4 to a full-marks 6, we did this. We may also have wanted the best for ‘our’ kids, even if they didn’t believe it! But think back to when league tables changed so BTECs weren’t accepted any more. Did the kids keep doing them or did it all change overnight?

And was that change for the kids?

Any testing which is high-stakes invites participants to try to influence results. It’s worth remembering that GCSE results are not just high-stakes for the students; they make a big difference to us as teachers, too! We are not neutral in this. We sometimes need to remember that.


With thanks to @oldandrewuk, @deevybee and @tom_hartley for the twitter discussion which informed and inspired this post. All arguments are mine, not theirs.


Responding to “Secret Origins”

This post is a duplicate of the comment I’ve just left on a post at Vince Ulam’s blog; it’s here because otherwise the time I spent on formatting and adding hotlinks was wasted.

“These useful idiots, grateful for the imagined recognition and eager to seem important in the eyes of their peers, promote the aims and ideas of their recruiters across social media and via ticketed salons.”

It must be really nice to see yourself as immune to all this, too smart to fall for the conspiracy that everyone else has been duped by. Because, whether you intended it or not, that’s how much of the original post comes across. I think this is what put my back up, to be honest. I’ve attended two ResearchED events, one of which I spoke at. I’d like to think I earned that, rather than being recruited as a useful idiot. But then, from your viewpoint, it’s only natural I’d fall for it: I’m not as clever as you. The contrary argument might be that you’re resentful of not having the opportunity or platform for your views, but I’ve no idea if you’ve applied to present at ResearchED or anything similar. So how about we look at the facts, rather than the inferences and assigned motives you write about?

ResearchED in Context

From a local teachmeet up to national events, the idea of ‘grassroots’ activism in teaching is a powerful one. As bloggers, we both believe that practitioners can influence the ideas and work of others. And yes, I agree that appearing practitioner- or public-led, but actually being influenced by specific political parties or organisations, would be appealing to those organisations. It would lend legitimacy to very specific ideas. You only have to look at the funding of patient organisations by pharmaceutical companies, or VoteLeave and allied groups, to see the issues. But there is surely a sliding scale of influence here.

The independence of such a grassroots organisation could be assessed in several ways. Do we look at where the money comes from? Do we examine the people involved in organising or leading it? Do we look at the decisions they make, and how they are aligned with other groups? Do we look at who chooses to be involved, and who is encouraged/dissuaded, subtly or otherwise?

In reality we should do all of those. I think my issue with your post is that you seem to be putting ResearchED in the same category as the New Schools Network among other groups, and (on Twitter) to be adding in the Parents and Teachers for Excellence Campaign too. I see them as very separate cases, and I’m much less hesitant about ResearchED – partly because the focus is teacher practice and engagement, not campaigning. And you raise Teach First, which I have my own concerns about but am leaving to one side as it’s not relevant here.

The New Schools Network is (mostly) funded by government, and many have written about the rather tangled set of circumstances which led to the funding and positions expressed being so closely tied to a policy from one political party. I must admit, I find myself very dubious about anything that Dominic Cummings has had a hand in! Their advocacy and support for free schools, with so far limited evidence that they provide good value for money, frustrates me.

The PTE Campaign is slightly different. I’ve not spent time searching for funding information, but I remember from previous news items – this from Schools Week for example – that it lacks transparency, to say the least. I think the name is misleading and their claim to be about moving power away from ‘the elites in Westminster and Whitehall’ disingenuous.

And let’s not even start with Policy Exchange.

From where I sit, if you want to group ResearchED with other education organisations, a much better match would seem to be Northern Rocks. The focus is improving and sharing classroom pedagogy, rather than campaigning. They’re both run on a shoestring. Classroom teachers are keen on attending and praise what they get out of the sessions. I can’t find anything on your blog about Northern Rocks, but that could be simple geography. (The bitter part of me suggests it’s not the first time anything happening past Watford gets ignored…)

Back to ResearchED: Funding and Speakers

“We have to hand it to Tom Bennett for his truly amazing accomplishment of keeping his international ‘grassroots’ enterprise going for four years without producing any apparent profits.”

Maybe it’s me seeing something which isn’t there, but your post seems to imply that there must be some big funding secret that explains why ResearchED is still going. What do you think costs so much money? The speakers are volunteers, as are the conference helpers. I don’t know if Tom gets a salary, but considering how much time it must be taking it would seem reasonable for at least a few people to do so. The catering costs, including staffing, are covered by the ticket price. The venues I remember are schools, so that’s not expensive.

As you’ve raised on Twitter during our discussions, the question of transport for UK-based speakers to overseas venues is an interesting one. I know that when I presented at Oxford (the Maths/Science one), my employer covered my travel costs; I assume that was the same for all speakers, or they were self-funding. If you have other specific funding concerns, I’ve not seen you describe them; you can hardly blame me for focusing on this one if you’d rather make suggestive comments than ask proper questions. I would also like to know if speakers can access funding support and if so, how that is decided. I can’t find that information on the website, and I think it should be there. I disagree with lots of what you say – or I wouldn’t have written all this – but that loses legitimacy if I don’t say where we have common ground.

I was surprised to find out how many ResearchED conferences there had been; I was vaguely thinking of seven or eight, which is why I was surprised by your suggestion that David Didau had presented at least six times. I stand corrected, on both counts. Having looked at the site, I’m also surprised that there’s no clear record of all the events in one place. A bigger ask – and one I have addressed to one of the volunteers who I know relatively well – would be for a searchable spreadsheet of speaker info covering all the conferences.

That would be fascinating, wouldn’t it? It would let us see how many repeat speakers there are, and how concentrated the group is. My gut feeling is that most speakers, like me, have presented only once or twice. Researchers would probably have more to say. I’d love to see the gender balance, which subject specialisms are better represented, primary vs secondary numbers, the contrast between state and independent sector teachers, researcher vs teacher ratios…

I’m such a geek sometimes.

You tweeted a suggestion I should ignore my personal experience to focus on the points in your post. The thing is that my personal experience of – admittedly only two – ResearchED conferences is that any political discussion tends to happen over coffee and sandwiches, and there’s relatively little of that. Maybe there’s more at the ‘strategic’ sessions aimed at HTs and policy-makers, rather than the classroom and department methods that interest me. If there’s animosity, it’s more likely to be between practitioners and politicians, rather than along party lines. I suspect I have more in common, to be honest, with a teacher who votes Tory than a left-leaning MP without chalkface experience. It’s my personal experience that contradicts the suggestions in your post about ResearchED being part of a shadowy conspiracy to influence education policy debate.

To return to Ben Goldacre, featured in your post as a victim of the puppet-masters who wanted a good brand to hide their dastardly plans behind: his own words suggest that in the interests of improving the evidence-base of policy, he’s content to work with politicians. Many strong views have been expressed at ResearchED. With such a wide variety of speakers, with different political and pedagogical viewpoints, I’m sure you can find some presentations and quotes that politicians would jump on with glee. And I’m equally sure that there are plenty they ignore, politely or otherwise. But I don’t believe the speakers are pre-screened for a particular message – beyond “looking at evidence in some way is useful for better education.” To be honest, I’m in favour of that – aren’t you? If there’s other bias in speaker selection, it was too subtle for me to notice.

But then, I’m not as clever as you.

Data, Bias and Poisoning the Well

Dear Reader, I did it again.

I could say that I’m blogging this because it could be used in the classroom. (It could, as a discussion about using data in context.) I could justify it with the fact that I’ve recommended books by the scientist-communicator in question. (And will again, because they’re ace.) I could talk about the challenges of the inevitable religious questions in a science lab, which we’ve all faced. (Like the year 10 who honestly believed, as he’d been told, that human bodies were literally made of clay like his holy book said.)

But the truth is I got annoyed on Twitter, got into a bit of a discussion about it, and don’t want to give up without making the points more clearly. So if you’re not up for a bit of a rant, come back when I’ve finally sorted out the write-up from the #ASEConf in Sheffield.

(I should point out that family stuff is a bit tricky at the moment, due to my Dad breaking his brand-new, freshly-installed hip. Before he’d even left the ward. So it’s possible that I’m procrastinating before lots of difficult decisions and a long journey to the Wild South.)

Appropriate Context?

A PR write-up of an academic study has been shared by several people online. The tweet I saw was from @oldandrewuk, who I presume shared direct from the page or RSS as it used the headline from there.

I responded pointing out the source of the research funding, the Templeton Foundation, which was founded to promote and investigate religious viewpoints. He suggested I was ‘poisoning the well’, a phrase I vaguely recognised but to my shame couldn’t pin down.

a fallacy where irrelevant adverse information about a target is preemptively presented to an audience, with the intention of discrediting or ridiculing everything that the target person is about to say. (Wikipedia)

I agree that this was preemptive, but would challenge the judgment that the information is irrelevant. The Templeton Foundation has a history of selectively funding and reporting research to fulfil their aim of promoting religious viewpoints. I thought of this information as providing valuable context; the analogy I used later in discussion was that of tobacco companies funding research showing limited effects of plain packaging. This was fresh in my mind due to recent discussions with another tweeter, outside of education circles. So when does providing context become a form of introducing bias? An interesting question.

Correlation and Causation?

Another point I made was that the data shared in the press release (although not in the abstract) seemed to hint at a correlation between the respondents’ religious views and their criticism of Richard Dawkins. It’s not unreasonable to suggest that this might be causative. The numbers, extracted:

  • 1581 UK scientists responded to the survey (if answers here mentioned Dawkins, it’s not referenced anywhere I can see)
  • 137 had in-depth interviews
  • Of these, 48 mentioned RD during answers to more general questions*
  • Of these 48, 10 were positive and 38 negative

*Before I look at those numbers in a little more detail, I’d like to point out: at no time were the scientists asked directly their view on Richard Dawkins. The 89 who didn’t mention him might have been huge fans or his archenemies. They might never have heard of him. To be fair, in the paper some follow-up work about ‘celebrity scientists’ is suggested. But I’d love to have seen data from a questionnaire on this specific point addressed to all of the scientists.

Of the 48 who mentioned him:

[Table: breakdown of the 48 mentions of Dawkins (rd-numbers).]

I suggested that the apparent link had been glossed over in the press release. That not a single scientist identified as religious had been positive about his work stood out for me. I wasn’t surprised that even non-religious scientists had identified problems; he is, let’s face it, a polarising character! But the balance was interesting, particularly as roughly a third of respondents being religious seemed a higher proportion than I remembered for UK scientists. But the makeup of the 137, in terms of religious belief vs non, wasn’t in the available information.

The Bigger Picture

I wanted more information, but the paper wasn’t available. Thankfully, #icanhazpdf came to my rescue. I had a hypothesis I wanted to test.

And so more information magically made its way into my inbox. I have those numbers, and it turns out I was right. It’s not made perfectly clear, perhaps because the religious orientation or lack thereof is the focus of other papers by the authors. But the numbers are there.

According to the paper, 27% of UK scientists surveyed are religious (from ‘slightly’ to ‘very’). It doesn’t make clear whether this is based on the questionnaire or applies specifically to the 137 interviewed. (EDIT: I’ve reached out to the authors and they weren’t able to clarify.) 27% of the 137 gives 37 who are religious, and therefore exactly 100 who are not. I’ve used these numbers as I have nothing better, but I’ve labelled them ‘inferred’ below.

Now, there are loads of ways to interpret these numbers. I’m sure I’ve not done it in the best way. But I’ve had a go.

[Table: mentions of Dawkins against the inferred religious/non-religious split (rd-data-2).]

What stands out for me is that religious scientists make up just over a quarter of those in the sample, but well over a third of those critical of Dawkins’ approach to public engagement. What’s clearer from this table is that the religious scientists were more likely to mention him in the first place, and as pointed out earlier these mentions were all negative. Is the difference significant?

  • 15 of 37 religious respondents were negative: 41%
  • 23 of 100 non-religious respondents were negative: 23%
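
Since I was already playing with the numbers, here’s a quick sanity check in Python – my own sketch, using the inferred 37/100 split above and scipy’s Fisher’s exact test; the choice of test is mine, not anything from the paper.

```python
# Back-of-envelope check on the inferred split; not taken from the paper itself.
from scipy.stats import fisher_exact

interviewed = 137
religious = round(0.27 * interviewed)      # 37, inferred from the 27% figure
non_religious = interviewed - religious    # 100

neg_religious, neg_non_religious = 15, 23  # negative mentions of Dawkins, from above

table = [
    [neg_religious, religious - neg_religious],              # religious: negative / rest
    [neg_non_religious, non_religious - neg_non_religious],  # non-religious: negative / rest
]

odds_ratio, p_value = fisher_exact(table)
print(f"religious negative rate:     {neg_religious / religious:.0%}")
print(f"non-religious negative rate: {neg_non_religious / non_religious:.0%}")
print(f"Fisher's exact p-value:      {p_value:.3f}")
```

Whether the difference clears the usual 0.05 bar depends on inferred numbers I can’t verify, so treat the output as a rough check rather than a result.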

I can’t help but think that’s a significant – although perhaps unsurprising – difference. Religious respondents were nearly twice as likely to be negative. So my hypothesis is supported by this data; the religious are over-represented in those who mentioned Dawkins during their answers. I’m surprised that this correlation escaped the Templeton-funded researchers. An equally correct headline would have been:

Scientists identifying as religious twice as likely to criticise Richard Dawkins’ approach to engagement unprompted.

Conclusions

I think in a lot of ways the numbers here aren’t the big story. I don’t think any of them are particularly surprising. I don’t have any answers for myself about the difference between providing necessary and important context, and ‘poisoning the well’ as @oldandrewuk put it. But I do have two insights that are important to me, if nobody else.

  1. The headline used in the press release is subtly misleading. “Most British scientists cited in study feel Richard Dawkins’ work misrepresents science.” My italics highlight the problem; 38 who were negative is not a majority of the 137 interviewed.
  2. The data used was selected to show one particular aspect of the results, and arguably some links were not explored despite being of interest. This can never be as good as a study designed to test one particular question. Only by closely reading the information was it clear how the judgments were made by the researchers.

I’d like to highlight that, as seemed fair to me, I invited @oldandrewuk to comment here following our discussion on Twitter. He has so far chosen not to do so.

Conflicts of Interest

To be transparent, I should point out for anyone who doesn’t realise that I’m an atheist (and humanist, and secular). I often also disagree with Dawkins’ communications work – in fact, if they’d asked me the same questions there’s a fair chance I would have made the point about him causing difficulties for the representation of science to non-scientists – but that’s why I recommend his science books specifically!

Links

The wonderful @evolutionistrue posted about this research too. As a contrast, have a look at how EvangelismFocus wrote it up.

You’re Welcome, Cambridge Assessment

It’s not often I can claim to be ahead of the trend. Pretty much never, to be honest. But this time I think I’ve managed it, and so I’m going to make sure all my readers, at least, know about it.

Recently the TES “exclusively reported” – which means other sites paraphrased their story and mentioned their name, but didn’t link – that Cambridge Assessment was considering ‘crowd-sourcing’ exam questions. This would involve teachers sending in possible questions which would then be reviewed and potentially used in external exams. Surplus questions would make up a large ‘question bank’.

I suggested this. This is, in fact, pretty much entirely my idea. I blogged ‘A New Exam Board’ in early 2012 suggesting teachers contribute questions which could then provide a range of sample papers as well as external exams. So it is not, despite what Tim Oates claims, a “very new idea.” Despite the similarity to my original post I do, however, have some concerns.

Backwards Backwards Design

So instead of teachers basing their classroom activities on giving kids the skills and knowledge they need to attempt exam questions, we’re doing it the other way around? As I’ve written before, it’s not necessarily a bad thing to ‘teach to the test’ – if the test is a good one. Writing exam questions and playing examiner is a valuable exercise, both for teachers and students, but the questions that result aren’t always helpful in themselves. As my OT-trained partner would remind me: “It’s the process, not the product.”

Credit

Being an examiner is something that looks good on a CV. It shows you take qualifications seriously and have useful experience. How can teachers verify the work they put into this? How can employers distinguish between teachers who sent in one dodgy question and those who shared a complete list, meticulously checked and cross-referenced? What happens when two or more teachers send in functionally identical questions?

Payment

A related but not identical point. How is the time teachers spend on this going to be recognized financially? And should it be the teacher, or the school? Unless they are paid, teachers are effectively volunteering their time and professional expertise, while Cambridge Assessment will continue to pay their permanent and contract staff. (I wonder how they feel about their work being outsourced to volunteers…)

Quality

It’s hardly surprising at this early stage that the details aren’t clear. One thing I’m interested in is whether the submissions shared as part of the ‘question bank’ will go through the same quality control process as those used in the exams. If so, it will involve time and therefore money for Cambridge Assessment. If not, it risks giving false impressions to students who use the bank. And there’s nothing in the articles so far to say whether the bank of questions will be free to access or part of a paid product.

Student Advantage

Unless there are far fewer ‘donated’ questions than I’d expect, I don’t think we will really see a huge advantage held by students whose teachers contributed a question. But students are remarkably sensitive to the claims made by teachers about “there’s always a question on x” or “it wasn’t on last year’s paper, so expect y topic to come up”. So it will be interesting to see how they respond to their teachers contributing to the exam they’ll be sitting.

You’re Welcome

I look forward to hearing from Cambridge Assessment, thanking me for the idea in the first place…


Unspecifications

I’m really starting to get annoyed with this, and I’m not even in the classroom full-time. I know that many colleagues – @A_Weatherall and @hrogerson on Staffrm for example – are also irritated. But I needed to vent anyway. It’ll make me feel better.

EDIT: after discussion on Twitter – with Chemistry teachers, FWIW – I’ve decided it might help to emphasise that my statements below are based on looking at the Physics specification. I’d be really interested in viewpoints from those who focus on teaching Biology and Chemistry, as well as those with opinions on whether I’ve accurately summed up the situation with Physics content or overreacted.

The current GCSE Science specifications are due to expire soon, to be replaced by a new version. To fit in with decisions by the Department for Education, there are certain changes to what we’ve been used to. Many others have debated these changes, and in my opinion they’re not necessarily negative when viewed objectively. Rather than get into that argument, I’ll just sum them up:

  1. Terminal exams at the end of year 11
  2. A different form of indirect practical skills assessment (note that ISAs and similar didn’t directly assess practical skills either)
  3. More content (100+ pages compared to the previous 70ish for AQA)
  4. Grades 9-1 rather than A*-G, with more discrimination planned for the top end (and, although not publicised, less discrimination between weaker students)

Now, as with many other subjects, the accreditation process seems to be taking longer than is reasonable. It also feels, from the classroom end, that there’s not a great deal of information about the process, including dates. The examples I’m going to use are for AQA, as that’s the specification I’m familiar with. At least partly that’s because I’m doing some freelance resource work and it’s matched to the AQA spec.

Many schools now teach GCSE Science over more than two years. More content is one of several reasons why that’s appealing; the lack of an external KS3 assessment removes the pressure for an artificial split in content. Even if the ‘official’ teaching of GCSE starts in Year 10, the content will obviously inform year 9 provision, especially with things like language used, maths familiarity and so on.

Many schools have been teaching students from the first draft specification since last September. The exam boards are now working on version three.

The lack of exemplar material, in particular questions, means it is very hard for schools to gauge likely tiers and content demand for ‘borderline’ students. Traditionally, this was the C/D threshold, and I’m one of many who recognised the pressure this placed on schools, via league tables, with teachers being pushed much harder to help kids move from a D to a C grade than from a C to a B. The comparison is (deliberately) not direct. As I understand it, an ‘old’ middle grade C is now likely to be a grade 4, below the ‘good pass’ of a grade 5.

Most schools start to set for GCSE groups long before the end of Year 9. Uncertainties about the grade implications will only make this harder.

The increased content has three major consequences for schools. The first is the teaching time needed, as mentioned above. The second is CPD; non-specialists in particular are understandably nervous about teaching content at GCSE which until now was limited to A-level. This is my day job and it’s frustrating not to be able to give good guidance about exams, even if I’m confident about the pedagogy. (For Physics: latent heat, the equation for energy stored in a stretched spring, electric fields, pressure relationships in gases, scale drawings for resultant forces, v² – u² = 2as, magnetic flux density.) The last is the need for extra equipment, especially for those schools which don’t teach A-level Physics, with the extra worry about required practicals.

Even if teachers won’t be delivering the new specification until September, they need to familiarize themselves with it now. Departments need to order equipment at a time of shrinking budgets.

I’m not going to suggest that a new textbook can solve everything, but they can be useful. Many schools have hung on in the last few years as they knew the change in specification was coming – and they’ve been buying A-level textbooks for that change! New textbooks can’t be written quickly. Proofreading, publishing, printing, delivery all take time. This is particularly challenging when new styles of question are involved, or a big change such as the new language for energy changes. Books are expensive and so schools want to be able to make a good choice. Matching textbooks to existing resources, online and paper-based, isn’t necessarily fast.

Schools need time to co-ordinate existing teaching resources, samples of new textbooks and online packages to ensure they meet student needs and cost limitations.

Finally, many teachers feel they are being kept in the dark. The first specification wasn’t accredited, so exam boards worked on a second. For AQA, this was submitted to Ofqual in December (I think) but not made available on the website. Earlier this month, Ofqual chose not to accredit this version, but gave no public explanation of why; that explanation would have given teachers an idea of what was safe to rely on and what was likely to change. Instead, teachers are left to rely on individual advisers, hearsay and Twitter gossip. It took several weeks for the new submission dates to appear on the website – now mid-March – and according to Ofqual it can take eight weeks from submission to accreditation.

If these time estimates are correct, the new AQA specification may not be accredited until mid-May, and as yet there is nothing on record about what was wrong with previous versions. Teachers feel they are being left in the dark, yet will be blamed when they don’t have time to prepare for students in September.

I think that says it all.

Lies, Damned Lies and Christian Statistics

I’m a science teacher. When talking about the characteristics of sound in my lessons, I encourage students to give detail. It’s not enough to say that a change causes ‘more vibrations’. If the sound is a higher pitch, the vibrations of the ear drum will be faster, or more frequent. If the sound is louder, the displacement of the ear drum is bigger; we say the vibrations have greater amplitude or more energy. So it’s not that the ‘more vibrations’ answer is wrong – just incomplete. If we don’t give a full answer it can be misunderstood.

So I was catching up with news and read an article on the BBC about the continued arguments about institutionalized discrimination and hate speech in the Anglican church. Now, this isn’t about Welby being sorry for the discrimination – just not sorry enough to stand against it – or the hypocrisy of them sending out advice to schools on homophobic bullying. Instead, it’s simply about a number in the report.

[BBC graphic: countries with the largest Anglican populations.]

I teach my students to do a ‘common sense check’ as part of any calculation and I was bemused that the BBC didn’t appear to have thought this through. Since when was a third of the UK Anglican? Now, I understand that calculating exactly how many (Anglican) Christians there are in the UK might be tricky, but 26 million seemed too far off to be reasonable. So I did some digging myself, and asked the organisation behind the ‘World Christian Database’ for the source of this number. It’s important to note that on Twitter they were very definite it was an aggregate figure and they used many sources of data.


So how should we find out how many (Anglican) Christians there are in the UK?

Simple, isn’t it? Pop into your local church on Sunday morning and count heads. But which Sunday? What about parishioners who are too ill to make it in, or are shift-workers? Would a Christmas or Easter service be more meaningful? And surely some believers prefer to worship in other ways. So church attendance figures, although useful, can probably be considered a lower limit. The Statistics for Mission 2014 (pdf) figures are just under a million for average Sunday attendance during October, with significantly higher numbers for Easter and Christmas services.

Church Attendance: 0.98m (980000)
Christmas Services: 2.4m

There have been lots of arguments about the census question, starting with the fact that it assumes the respondent will have a religion in the first place. The cultural identity part of this is recognised within the Census analysis, as the quote below demonstrates:

The question (‘What is your religion?’) asks about religious affiliation, that is how we connect or identify with a religion, irrespective of actual practice or belief.

According to the last Census figures, England and Wales has 33m Christians, but this isn’t broken down into denominations. Most data I’ve found suggests around half of UK Christians consider themselves Anglican, so we can get a reasonable estimate.

Census Anglicans: 17m (approx)

Many surveys call this number into question, for example this report discussing data that only 30% of Britons consider themselves religious at all. As a contrast, the British Social Attitudes Survey asks a range of questions of a randomly selected sample (around 3000 people), including their religion and religious upbringing. The last dataset suggests 17% of the population describe themselves as Anglican, a significant drop.

Self-described: 8.5m (from BSAS)

Of course, if we wanted to simply collect data on the number of people who had been baptized, this would be easier. The agreed estimate – which seems to have been used for not just years but decades – is 26m. I’d be very interested to know how this value hasn’t changed; surely infant baptisms and deaths of those baptized can’t have coincidentally been in balance for all this time?

Baptized Anglicans: 26m

Most of these are, naturally, infant baptisms – which brings me to an important and obvious point. I was baptized. But like many others, the fact of my baptism is completely irrelevant to my (lack of) belief. This number includes me – and if you were baptized, it includes you too. (Some non-believers, starting with John Hunt in 2009, are trying to do something about this.) So using this figure, while ignoring all the other values, seems disingenuous to say the least and knowingly dishonest at the most. It’s like the TK Maxx adverts, ‘always up to 60% off’. It could mean 59% off. It could mean 1% off. That there are apparently 26 million people baptized as Anglican in the UK is a meaningless figure without the context – which significantly undermines any argument based upon it.
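
Just to line the various headcounts up against the one the chart appears to rely on, here’s my own quick tally, using the rounded figures quoted above:

```python
# The different UK Anglican headcounts above, compared with the 26m 'baptized' figure.
estimates_m = {
    "average Sunday attendance": 0.98,
    "Christmas services": 2.4,
    "self-described (BSAS)": 8.5,
    "Census, approx. half of 33m Christians": 17,
    "baptized (the figure used)": 26,
}

for label, millions in estimates_m.items():
    print(f"{label:<40} {millions:>5}m  ({millions / 26:.0%} of 26m)")
```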

Might it be reasonable, I wonder, to suggest that claiming 26m Anglicans in the UK is bearing false witness?


Why Teach?

EDIT: please note I do not endorse, support or recommend the Central College for Education, a fee-paying distance learning institution. If considering a career in teaching, I recommend you contact university education departments who will advise about the best route for you.

I miss teaching kids.

Don’t get me wrong, I’m really enjoying my current day job, working as a TLC with the Stimulating Physics Network. I work with a dozen schools to develop physics teaching, as well as early career teachers; the adults are, on the whole, more focused and motivated than year 9. I get time to perfect the demonstrations, and I can log CPD time towards my (part-time) working week. I get a lot more time with my family, from the eleven-year-old currently being home-schooled (long story) to the toddler who thinks sleep is for wimps. I can fit in a little freelance work here and there. (I have room for more. Email me.)

But it’s not the same.

The days are more predictable, even though I don’t have a timetable as such. Colleagues get excited about physics practicals, yes, but it’s not the same as the look on a kid’s face when they hear a slinky for the first time. (You can do something similar with a fork.) Digressions happen, but you don’t get to help a student realise how science matters to their life, hobbies, pets or sports. Even attentive teachers – which on a dark evening after a long day is a big ask – can’t measure up to a class of thirty seeing you put out a candle with carbon dioxide, or suddenly silent teens passing around a flint spearpoint made by their ancestor, 300 generations back.

So Alom’s post asking “Why teach when you can tutor?” was an interesting read. I’ve tutored too – although not at London prices – and it’s rewarding, but nothing like being in front of a class. It’s a conversation, not a performance. It’s tiring in a very different way. In the best lessons, what you do seems effortless to the kids. All the hard work, like a swan on a lake, is below the surface. Part of the ‘flow’ is that it looks easy. Maybe that’s why so many non-teachers think they’re entitled to express an opinion about the classroom? At the moment I’m working with adults for my day job and volunteering as a Cub leader. But they enjoyed their Science badge, which is something…

There’s a ‘buzz’ about a good lesson that makes up for a lot of the grief. No teacher goes into the profession wanting to do paperwork and fill out spreadsheets of targets. I’ve yet to meet a teacher who likes marking. Appreciates the need, yes. Enjoys sharing feedback with students and seeing them take it on board, absolutely. The long holidays are good, even if we pay for them in blood, sweat and tears during term-time. But they’re a perk, not the purpose.

Kids ask great questions. They get excited about cool things, because they’ve not learned to fake cynicism. At least some of them will find you at break with yet more questions, or an empty chrysalis they found at the weekend, or to borrow books. They’ll act shocked when you say they can use your first name on Duke of Edinburgh’s Award expeditions, because “I’m a volunteer youth leader at the weekend, not your teacher.”

They’ll hate you, sometimes. They resist, and they fight. We don’t get it right every time, and not every student will be a success story in your lessons. Those are the ones where you look really hard for something real to praise them on, whether it’s their sports performance or how their English teacher was raving about their poetry. (If you can link it to science, even better – I had one student who applied her choreography skills to remember the different ‘types’ of energy.) But because you see them on the corridor you can thank them for holding a door, or show them in other tiny ways that you’re still both members of a school community.

The real question – the one which teachers, school leaders, governors and politicians need to answer – is “Why tutor when you could teach?” Some of the reasons might be individual, family commitments or ill-health for example. But if we’re going to keep recruiting and keeping classroom teachers, we need to be able to give good reasons. The draw of the classroom must outweigh the benefits of tutoring. For many, the good things about being in a school aren’t enough to make up for the disadvantages. Only by being honest about those reasons, and being committed to changing them, will we make the classroom a more attractive place for all of our colleagues.