Measurable Outcomes

Following a conversation on Twitter about the phonics screening check administered in primary school, I have a few thoughts about how it’s relevant to secondary science. First, a little context – especially for colleagues who have only the vaguest idea of what I’m talking about. I should point out that all I know about synthetic phonics comes from glancing at materials online and helping my own kids with reading.

Synthetic Phonics and the Screening Check

This is an approach to teaching reading which relies on breaking words down into parts. These parts and how they are pronounced follow rules; admittedly, English is probably less regular than many other languages! But the rules are useful enough to be a good stepping stone. So far, so good – that’s true of so many models I’m familiar with from the secondary science classroom.

The phonics screen is intended, on the face of it, to check if individual students are able to correctly follow these rules with a sequence of words. To ensure they are relying on the process, not their recall of familiar words, nonsense words are included. There are arguments that some students may try to ‘correct’ those to approximate something they recognise – the same way as I automatically read ‘int eh’ as ‘in the’ because I know it’s one of my characteristic typing mistakes. I’m staying away from those discussions – out of my area of competence! I’m more interested in the results.

Unusual Results

We’d expect most attributes to follow a predictable pattern over a population. Think about height in humans, or hair colour. There are many possibilities but some are more common than others. If the distribution isn’t smooth – and I’m sure there are many more scientific ways to describe it, but I’m using student language because of familiarity – then any thresholds are interesting by definition. They tell us that something interesting is happening here.

The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” but “That’s funny …”

Possibly Isaac Asimov. Or possibly not.

It turns out that with the phonics screen, there is indeed a threshold. And that threshold just so happens to be at the nominal ‘pass mark’. Funny coincidence, huh?
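To picture what that kind of anomaly looks like, here’s a minimal sketch in Python using invented numbers (nothing to do with the real screening data): a roughly smooth spread of scores out of 40, alongside a version where some scores just below a nominal pass mark of 32 have been nudged up to exactly 32.

```python
import random
from collections import Counter

random.seed(1)

# Invented scores out of 40: illustration only, not real screening results.
scores = [sum(random.random() < 0.75 for _ in range(40)) for _ in range(10_000)]

PASS_MARK = 32

# A crude model of the 'nudge': some scores just below the pass mark
# are bumped up to exactly the pass mark.
nudged = [PASS_MARK if PASS_MARK - 3 <= s < PASS_MARK and random.random() < 0.5 else s
          for s in scores]

print("score  natural  nudged")
natural, bumped = Counter(scores), Counter(nudged)
for score in range(28, 37):
    print(f"{score:>5}  {natural[score]:>7}  {bumped[score]:>6}")
```

The first column falls away smoothly; the second shows a dip just below 32 and a pile-up exactly on it – the same shape that makes the real plot so striking.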

The esteemed Dorothy Bishop, better known to me and many others as @deevybee, has written about this several times. A very useful post from 2012 sums up the issue. I recommend you read that properly – and the follow-up in 2013, which showed the issue continued to be of concern – but I’ve summarised my own opinion below.

phonics plot 2013
D Bishop, used with permission.

More kids were being given a score of 32 – just passing – than should have been. We can speculate on the reasons for this, but a few leading candidates are fairly obvious:

  • teachers don’t want pupils who they ‘know’ are generally good with phonics to fail by one mark on a bad day.
  • teachers ‘pre-test’ students and give extra support to those pupils who are just below the threshold – like C/D revision clubs at GCSE.
  • teachers know that the class results may have an impact on them or the school.

This last one is the issue I want to focus on. If the class or school results are used in any kind of judgement or comparison, inside or outside the school, then it is only sensible to recognise that human nature should be considered. And the pass rate is important. It might be a factor when internal roles are decided. It might be relevant to performance management discussions and/or pay progression. (All 1% of it.)

“The teaching of phonics (letters and the sounds they make) has improved since the last inspection and, as a result, pupils’ achievement in the end of Year 1 phonics screening check has gradually risen.”

From an Ofsted report

Would the inspector in that case have been confident that the teaching of phonics had improved if the scores had not risen?

Assessment vs Accountability

The conclusion here is obvious, I think. Most of the assessment we do in school is intended to be used in one of two ways: formatively or summatively. We want to know what kids know so we can provide the right support for them to take the next step. And we want to know where that kid is, compared to some external standard or their peers.

Both of those have their place, of course. Effectively, we can think of these as tools for diagnosis. In some cases, literally that: I had a student whose written work varied greatly depending on where he sat. His writing was good, but words were spelt phonetically (or fonetically) if he sat anywhere other than the first two rows. It turned out he needed glasses for short-sightedness. The phonics screen is or was intended to flag up those students who might need extra support; further testing would then, I assume, suggest the reason for their difficulty and possible routes for improvement.

If the scores are also being used as an accountability measure, then there is a pressure on teachers to minimise failure among their students. (This is not just seen in teaching; an example I’m familiar with is ambulance response times, which I first read about in Dilnot and Blastland’s The Tiger That Isn’t, but the issues have continued – see e.g. this from the Independent.) Ideally, this would mean ensuring a high level of teaching and so high scores. But if a child has an unrecognised problem, it might not matter how well we teach them; they’re still going to struggle. It is only by the results telling us that – and in some cases, telling the parents reluctant to believe it – that we can help them find individual tactics which work.

And so teachers, reacting in a human way, sabotage the diagnosis of their students so as not to risk problems with accountability. Every time a HoD put on revision classes, every time students were put in for resits because they were below a boundary, every time an ISA graph was handed back to a student with a post-it suggesting a ‘change’, every time their PSA mysteriously changed from an okay 4 to a full-marks 6, we did this. We may also have wanted the best for ‘our’ kids, even if they didn’t believe it! But think back to when the league tables changed so BTECs weren’t accepted any more. Did the kids keep doing them, or did it all change overnight?

And was that change for the kids?

Any testing which is high-stakes invites participants to try to influence results. It’s worth remembering that GCSE results are not just high-stakes for the students; they make a big difference to us as teachers, too! We are not neutral in this. We sometimes need to remember that.


With thanks to @oldandrewuk, @deevybee and @tom_hartley for the twitter discussion which informed and inspired this post. All arguments are mine, not theirs.

You’re Welcome, Cambridge Assessment

It’s not often I can claim to be ahead of the trend. Pretty much never, to be honest. But this time I think I’ve managed it, and so I’m going to make sure all my readers, at least, know about it.

Recently the TES “exclusively reported” – which means other sites paraphrased their story and mentioned their name, but didn’t link – that Cambridge Assessment was considering ‘crowd-sourcing’ exam questions. This would involve teachers sending in possible questions which would then be reviewed and potentially used in external exams. Surplus questions would make up a large ‘question bank’.

I suggested this. This is, in fact, pretty much entirely my idea. I blogged ‘A New Exam Board’ in early 2012, suggesting teachers contribute questions which could then provide a range of sample papers as well as external exams. So it is not, despite what Tim Oates claims, a “very new idea.” Despite the similarity to my original post, I do, however, have some concerns.

Backwards Backwards Design

So instead of teachers basing their classroom activities on giving kids the skills and knowledge they need to attempt exam questions, we’re doing it the other way around? As I’ve written before, it’s not necessarily a bad thing to ‘teach to the test’ – if the test is a good one. Writing exam questions and playing examiner is a valuable exercise, both for teachers and students, but the questions that result aren’t always helpful in themselves. As my OT-trained partner would remind me: “It’s the process, not the product.”

Credit

Being an examiner is something that looks good on a CV. It shows you take qualifications seriously and have useful experience. How can teachers verify the work they put into this? How can employers distinguish between teachers who sent in one dodgy question and those who shared a complete list, meticulously checked and cross-referenced? What happens when two or more teachers send in functionally identical questions?

Payment

A related but not identical point. How is the time teachers spend on this going to be recognised financially? And should it be the teacher, or the school? Unless they are paid, teachers are effectively volunteering their time and professional expertise, while Cambridge Assessment will continue to pay their permanent and contract staff. (I wonder how they feel about their work being outsourced to volunteers…)

Quality

It’s hardly surprising at this early stage that the details aren’t clear. One thing I’m interested in is whether the submissions shared as part of the ‘question bank’ will go through the same quality control process as those used in the exams. If so, it will involve time and therefore money for Cambridge Assessment. If not, it risks giving false impressions to students who use the bank. And there’s nothing in the articles so far to say whether the bank of questions will be free to access or part of a paid-for product.

Student Advantage

Unless there are far fewer ‘donated’ questions than I’d expect, I don’t think we will really see a huge advantage held by students whose teachers contributed a question. But students are remarkably sensitive to the claims made by teachers about “there’s always a question on x” or “it wasn’t on last year’s paper, so expect y topic to come up”. So it will be interesting to see how they respond to their teachers contributing to the exam they’ll be sitting.

You’re Welcome

I look forward to hearing from Cambridge Assessment, thanking me for the idea in the first place…

 

Unspecifications

I’m really starting to get annoyed with this, and I’m not even in the classroom full-time. I know that many colleagues – @A_Weatherall and @hrogerson on Staffrm for example – are also irritated. But I needed to vent anyway. It’ll make me feel better.

EDIT: after discussion on Twitter – with Chemistry teachers, FWIW – I’ve decided it might help to emphasise that my statements below are based on looking at the Physics specification. I’d be really interested in viewpoints from those who focus on teaching Biology and Chemistry, as well as from those with opinions on whether I’ve accurately summed up the situation with Physics content or overreacted.

The current GCSE Science specifications are due to expire soon, to be replaced by a new version. To fit in with decisions by the Department for Education, there are certain changes to what we’ve been used to. Many others have debated these changes, and in my opinion they’re not necessarily negative when viewed objectively. Rather than get into that argument, I’ll just sum them up:

  1. Terminal exams at the end of year 11
  2. A different form of indirect practical skills assessment (note that ISAs and similar didn’t directly assess practical skills either)
  3. More content (100+ pages compared to the previous 70ish for AQA)
  4. Grades 9-1 rather than A*-G, with more discrimination planned for the top end (and, although not publicised, less discrimination between weaker students)

Now, as with many other subjects, the accreditation process seems to be taking longer than is reasonable. It also feels, from the classroom end, that there’s not a great deal of information about the process, including dates. The examples I’m going to use are for AQA, as that’s the specification I’m familiar with. At least partly that’s because I’m doing some freelance resource work and it’s matched to the AQA spec.

Many schools now teach GCSE Science over more than two years. More content is one of several reasons why that’s appealing; the lack of an external KS3 assessment removes the pressure for an artificial split in content. Even if the ‘official’ teaching of GCSE starts in Year 10, the content will obviously inform Year 9 provision, especially with things like language used, maths familiarity and so on.

Many schools have been teaching students from the first draft specification since last September. The exam boards are now working on version three.

The lack of exemplar material, in particular questions, means it is very hard for schools to gauge likely tiers and content demand for ‘borderline’ students. Traditionally, this was the C/D threshold, and I’m one of many who recognised the pressure this placed on schools with league tables, with teachers being pushed much harder to help kids move from a D to a C grade than from a C to a B. The comparison is (deliberately) not direct. As I understand it, an ‘old’ middle grade C is now likely to be a grade 4, below the ‘good pass’ of a grade 5.

Most schools start to set for GCSE groups long before the end of Year 9. Uncertainties about the grade implications will only make this harder.

The increased content has three major consequences for schools. The first is the teaching time needed, as mentioned above. The second is CPD; non-specialists in particular are understandably nervous about teaching content at GCSE which until now was limited to A-level. This is my day job and it’s frustrating not to be able to give good guidance about exams, even if I’m confident about the pedagogy. (For Physics: latent heat, the equation for energy stored in a stretched spring, electric fields, pressure relationships in gases, scale drawings for resultant forces, v² = u² + 2as, magnetic flux density.) The last is the need for extra equipment, especially for those schools which don’t teach A-level Physics, with the extra worry about required practicals.

Even if teachers won’t be delivering the new specification until September, they need to familiarise themselves with it now. Departments need to order equipment at a time of shrinking budgets.

I’m not going to suggest that new textbooks can solve everything, but they can be useful. Many schools have hung on in the last few years as they knew the change in specification was coming – and they’ve been buying A-level textbooks for that change! New textbooks can’t be written quickly. Proofreading, publishing, printing and delivery all take time. This is particularly challenging when new styles of question are involved, or a big change such as the new language for energy changes. Books are expensive, and so schools want to be able to make a good choice. Matching textbooks to existing resources, online and paper-based, isn’t necessarily fast.

Schools need time to co-ordinate existing teaching resources, samples of new textbooks and online packages to ensure they meet student needs and cost limitations.

Finally, many teachers feel they are being kept in the dark. The first specification wasn’t accredited, so exam boards worked on a second. For AQA, this was submitted to Ofqual in December (I think) but not made available on the website. Earlier this month, Ofqual chose not to accredit this version, but gave no public explanation as to why. Teachers are left to rely on individual advisers, hearsay and Twitter gossip. That explanation would have given teachers an idea of what was safe to rely on and what was likely to change. It took several weeks for the new submission dates to appear on the website – now mid-March – and according to Ofqual it can take eight weeks from submission to accreditation.

If these time estimates are correct, the new AQA specification may not be accredited until mid-May, and as yet there is nothing on record about what was wrong with previous versions. Teachers feel they are being left in the dark, yet will be blamed when they don’t have time to prepare for students in September.

I think that says it all.

Square Pegs and Round Holes 1/2

My son is a keen and able reader. Not quite ten, he read and enjoyed The Hobbit earlier this year. He likes both Harry Potter and Alex Rider. David Walliams’ books are now ‘too young for him’ and he’s a big fan of variations on classic myths and fairy tales – The Sisters Grimm and Percy Jackson, for example. He was a ‘free reader’ most of last year and continues to make progress when tested in school, in both reading and writing.

He’s now back on the reading scheme – level 17 Oxford. According to the official website of the series, these books are pitched below the reading age the school assessed him at last year: 11 years, 9 months. They’re short, mainly dull, and despite his teacher’s claim that he needs to be reading a wider variety, the school stock is almost all adapted classics. Jane Eyre and Silas Marner for a ten-year-old boy? Really?

We’ve got a good range at home, and he’s reading these in between finishing off the official school books (which he manages in less than an hour, but can’t change more than a couple of times a week). It’s not stopping him from reading. But I hate that for the first time in ages, my son sees reading as a chore.

You can probably tell I’m a little annoyed about all this.

Reasons and Excuses

I’m pretty sure that there are two reasons his school are being so inflexible. Firstly, it’s a new scheme and a new teacher, and they’ve got a lot on at this time of year. Only two kids – the other a year older – are on this level in the school. The scheme and approach probably work fine with everyone else, and adapting it to one student is a big time commitment. I understand that. I really do.

The other is about assessment. We’d assumed that the only way he can be assessed (via the Suffolk reading scale, apparently) is by reading the books that match it. We’re now not sure that’s right. The school have chosen an assessment strategy which doesn’t cater for the highest ability. It will be interesting to see how they try to show progress, seeing as these are too easy for him.

I think they didn’t believe at first how quickly he was reading them. When he demonstrated that he had understood, retained and could explain the books verbally, they tried to slow him down. “Write a review.” “Discuss it with your parents so they can write in your record.” And, I kid you not – “Write a list of all the unstressed vowels.”

Maybe this week he’ll be told to read them while standing on his head. But that won’t address the problem – in fact, two problems – with this specific range.

Boredom and Spoilers

I should probably read a wider range of books myself. I’ll hold my hand up to limiting myself to SF and fantasy too much. But he does read a range, given the choice – and this selection doesn’t give him an option. Adapted classics, followed by… well, more adapted classics. He liked Frankenstein. Jekyll and Hyde scared him. Jane Eyre and Wuthering Heights bored him. Silas Marner was an ordeal. This is not varied. If the school can’t afford to buy more (which, for such a small number of kids, I can understand) then why can’t he read his own as well? We’d happily accept a list of recommendations from the teacher. What about Harry Potter, Malorie Blackman, Young James Bond or Sherlock Holmes, Philip Pullman, Michelle Paver (he liked her books – thanks to @alomshaha for the suggestion)? If they have to be classics: Narnia, John Masefield, E. Nesbit…

The other issue is that if he’s read – or been made to read – versions of great books like Frankenstein or The Three Musketeers now, what are the chances he’ll enjoy the full editions in a couple of years? Why spoil his future enjoyment this way? I doubt his GCSE English teacher will let him read Percy Jackson when the rest of the class are reading Jekyll and Hyde for the first time, just because he knows the ending. A crap film can spoil a good book (Ender’s Game and Starship Troopers, step forward) and I can’t see why this would be any different. I’m sure the publishers have lots of reasons for getting ‘classics’ on to the list, but haven’t teachers pointed out that kids will grow up to have a lifetime of enjoying good books?

Ranting and Reflection

Having to assess all kids against one set of standards inevitably means that some find it too hard, some too easy. When I stopped thinking like a parent, and started thinking like a teacher, this made a lot more sense. I’m sure I’ve done something similar at some point, and my reflections will be in a separate post, hopefully in a few days. For now I needed to rant, and hopefully you’re still reading and can see that I acknowledge it!

I’d really welcome any responses on this one – especially from any primary colleagues!

Heat Misconceptions

Like many of us, I’m currently spending the majority of my time helping students prepare for external exams. Because of how science exams now work in secondary school, most of my classes are facing one or more exams in the next few weeks, just for physics. Seven classes are doing GCSE content (2 x Yr9, 3 x Yr10, 2 x Yr11) and two classes are in sixth form.

Something I’ve spent a little time on has been prompted by the variety of answers to mock questions on heat transfer. It was clear that many able students were struggling with clear explanations – and perhaps understanding – of the mechanisms by which thermal energy is transferred, as demonstrated by Qs 4 and 5 on the AQA P1 June 2013 paper. So I looked into it.

Examiners’ Reports

My first step was to check whether this was an isolated case or something seen when these exam papers were originally sat. I strongly recommend that all colleagues, if they’re not already familiar with them, find out where they can read the reports written after each exam for the benefit of teachers and exam boards. They’re available (delayed) for pupils too, but with AQA you need to go through the main subject page rather than the quick ‘Past Papers’ link.

…nearly half of students scored two marks or less. Common mistakes were referring to ‘heat particles’, thinking that the vacuum stopped all forms of heat transfer, thinking that the vacuum contained air and referring to the transfer of ‘cold’.

…Students who referred to water particles often mistakenly referred to them ‘vibrating more’ as a result of the energy given, or to the particles themselves becoming less dense.

From AQA P1 June 2012 Report

So it wasn’t just my kids.

Now What?

I think of myself as a fairly evidence-based practitioner, so next I wanted to check out some wider sources. A quick search for ‘physics misconceptions heat’ returns a large number of results, including one from more than 20 years ago which shows how established the problem is.

As a science teacher, I thought Physics Education from the IOP and School Science Review from the ASE would be good places to look. Unfortunately both require memberships, a problem in terms of cost which I’ve blogged about before. Students’ misconceptions about heat transfer mechanisms and elementary kinetic theory is relevant, as is this resource available without login on the ASE site. R Driver’s book Making Sense of Secondary Science was one of several recommended during a 2011 #asechat, “What misconceptions do students have in science?”

I used the students’ answers as a way to diagnose the ‘alternative conceptions’ that they had built up over time. For many, these had clearly been established long before my arrival, but I’m going to build some of the ideas into my next cycle of teaching for early intervention. Some of the points from Cyberphysics UK and PhysicsClassroom.com were also useful. What I produced – firstly as a scribbled list, then as a more formal activity – was the ‘Seven Sins of Heat Transfer’. In time I’d like to produce some confidence grids and link these to the diagnostic questions approach as explained at York Science. Concept cartoons with clear viewpoints let students explore different models without ‘owning up’ to ideas they think are wrong, which can be very helpful. And so here’s one of the great @DoTryThisAtHome cartoons:

 

Seven Sins of Heat Transfer

  • Heat rises
  • Particles of heat
  • Expanding particles
  • Shiny materials are good conductors
  • Cold gets in
  • Condensing and contracting are the same
  • Trapped particles can’t move through a vacuum flask

These are what I wrote while marking papers; I’ve just removed the profanity. My reading showed me that some were common alternative conceptions, while others demonstrated a poor understanding of technical terms, often made worse by persistent misuse in ‘everyday’ language. A bit of thinking, and more reading, helped me find ways to highlight these issues for students.

Printable version with prompt Qs: 7sins as .pdf

EDIT: I shouldn’t have needed prompting, but CathN suggested in the comments that model answers would be useful, particularly for non-specialists. And so I’ve put together a presentation going through each of the sections, explained more or less the way I would in class. Obviously colleagues will have their own thoughts and preferred analogies, but I’d love comments on possible improvements; simply click on the title slide below.

7sins

Alternatively: 7sins as .ppt

When time allows during revision, and certainly next time I teach this content, I’ll be linking these misconceptions explicitly with practical activities. I think I’ll also ban the use of ‘heat’ by itself. If students are forced to use ‘collisions between touching particles’, ‘energetic particles in a lower density region’ and ‘thermal radiation’ then we should be able to solve the sloppy language issue, at least.

Thoughts and comments on this are very welcome; it strikes me that I could usefully spend time producing a series of lessons and resources on just this sort of thing: exam question followed by diagnostic questions, a circus of activities to highlight the misconception, then applications of the correct idea to a new situation. So if anyone wants to pay me, well, you know where I am…

In the meantime:

I’m trying to track my impact (e.g. you using this resource or basing your own on my ideas). You don’t have to leave your name, just a few words about how what I did made a difference. If you’ve blogged about it, I’d love for you to include a link. Tweets are transient and comments on the posts are hard to collect together, but this would really help.

Blog Feedback via Google Form

 

Exam Paper Debriefs (Summer 2012)

I’m combining two resources into one post here, but hopefully they should still show up by searching. (He types, hurriedly adding some tags.) I’ve made two PowerPoints, each matched to what I think are the easy marks available on the summer 2012 P1 and P2 exams from AQA. They’re useful as practice or as full mocks; I often have students go through them focusing on what they should all aim for, before checking through in more detail. Having students divide their missed marks (using this exam paper debrief pdf) into recall failures and method mistakes can be helpful.

If students are able, they could also be pointed towards the examiners’ reports, which are only available if you go through the subject link at AQA rather than the direct Past Papers route. If not, then this is our job anyway – perhaps something to consider as part of a backwards design approach?

P1 june2012 easy as ppt, for the P1 summer 2012 exam – see also my P1 summary activity.

P2 may2012 easy as ppt, for the P2 summer 2012 exam – see also my P2 summary activity.

And yes, before you ask – I am working on equivalent resources for more recent exams, hopefully to be done before we all need them for mocks. Although the summer 2013 papers haven’t shown up yet – is that because, without January 2014 papers to use, AQA are expecting those to be used as mocks too? Must check e-AQA… (adds to ever-growing to-do list)

Finally: yes, I’ve been fairly quiet and quite down of late; lots going on, I’ll be fine, send chocolate and coffee if you’re feeling helpful. As that’s pretty much all I’ve been eating for a while, supplies are running low!

 

 

Too Much Applause?

A very quick one, because I’ve got marking looming as usual. I read an interesting post on Lifehacker about seeking feedback rather than applause. It reflected something we discussed at a recent department meeting: to help all students progress, we need to be specific with praise as well as constructive with criticism. I think we all know about giving students specific and measurable targets to improve when marking books – ‘Underline all titles’ rather than ‘Keep work neater’, for example.

But we need to do the same when we praise students too. We need to tell them why we thought that a piece of work was excellent, so they know to look back at it for guidance when they struggle with a related task or concept. Otherwise it’s just clapping. Applause is nice – but feedback is better.

My browser is refusing to let me add the link so I’ll just have to paste it: http://lifehacker.com/distinguish-between-feedback-and-applause-to-get-more-u-1500218034