Measurable Outcomes

Following a conversation on Twitter about the phonics screening check administered in primary school, I have a few thoughts about how it’s relevant to secondary science. First, a little context – especially for colleagues who have only the vaguest idea of what I’m talking about. I should point out that all I know about synthetic phonics comes from glancing at materials online and helping my own kids with reading.

Synthetic Phonics and the Screening Check

This is an approach to teaching reading which relies on breaking words down into parts. These parts, and how they are pronounced, follow rules; admittedly English is probably less regular than many other languages! But the rules are useful enough to be a good stepping stone. So far, so good – that’s true of so many models I’m familiar with from the secondary science classroom.
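
Out of interest – and speaking as a non-specialist – here’s a toy sketch of what ‘synthetic’ decoding looks like as a procedure. To be clear, the rule table, the phoneme symbols and the nonsense word below are all invented for illustration; this is not any real phonics scheme, just a greedy longest-match lookup.

    # A toy illustration of rule-based decoding: every grapheme rule and
    # symbol here is invented, not taken from a real phonics scheme.
    RULES = {
        "sh": "ʃ", "ch": "tʃ", "th": "θ", "ai": "eɪ", "ee": "iː",
        "a": "æ", "e": "ɛ", "i": "ɪ", "o": "ɒ", "u": "ʌ",
        "b": "b", "c": "k", "d": "d", "f": "f", "g": "g", "h": "h",
        "l": "l", "m": "m", "n": "n", "p": "p", "r": "r", "s": "s", "t": "t",
    }

    def decode(word):
        sounds, i = [], 0
        while i < len(word):
            for size in (2, 1):  # try two-letter graphemes before single letters
                part = word[i:i + size]
                if part in RULES:
                    sounds.append(RULES[part])
                    i += size
                    break
            else:
                raise ValueError(f"no rule for {word[i]!r}")
        return sounds

    print(decode("chaip"))  # a nonsense word: ['tʃ', 'eɪ', 'p']

A word the child has never seen can still be sounded out, as long as the rules have been learned.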

The phonics screen is intended, on the face of it, to check if individual students are able to correctly follow these rules with a sequence of words. To ensure they are relying on the process, not their recall of familiar words, nonsense words are included. There are arguments that some students may try to ‘correct’ those to approximate something they recognise – the same way as I automatically read ‘int eh’ as ‘in the’ because I know it’s one of my characteristic typing mistakes. I’m staying away from those discussions – out of my area of competence! I’m more interested in the results.

Unusual Results

We’d expect most attributes to follow a predictable pattern over a population. Think about height in humans, or hair colour. There are many possibilities but some are more common than others. If the distribution isn’t smooth – and I’m sure there are more scientific ways to describe it, but I’m using student language because of familiarity – then any sudden jumps or thresholds are interesting by definition. They tell us that something interesting is happening there.

The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” but “That’s funny …”

Possibly Isaac Asimov. Or possibly not.

It turns out that with the phonics screen, there is indeed a threshold. And that threshold just so happens to be at the nominal ‘pass mark’. Funny coincidence, huh?
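
You can reproduce that shape with a few lines of simulation. To be clear, this is a made-up model rather than the real screening data: the mean, spread and ‘nudge’ behaviour below are all invented, purely to show how a modest bias near a threshold puts a dip-and-spike into an otherwise smooth distribution.

    # Hypothetical model, not real data: raw scores are smooth, but some
    # near-misses get 'nudged' up to the pass mark when reported.
    import random

    PASS_MARK = 32  # nominal pass mark on the 40-word check

    def raw_score():
        # a smooth, roughly normal score, clamped to the 0-40 range
        return max(0, min(40, round(random.gauss(30, 6))))

    def reported_score(score, window=2, nudge_rate=0.5):
        # scores just below the threshold sometimes become a bare pass
        if PASS_MARK - window <= score < PASS_MARK and random.random() < nudge_rate:
            return PASS_MARK
        return score

    counts = {}
    for _ in range(100_000):
        s = reported_score(raw_score())
        counts[s] = counts.get(s, 0) + 1

    for s in range(26, 38):  # crude text histogram around the threshold
        print(f"{s:2d} {'#' * (counts.get(s, 0) // 200)}")

Run it and you get a dip at 30 and 31 and a spike at exactly 32 – the same qualitative shape as the real plots.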

The esteemed Dorothy Bishop, better known to me and many others as @deevybee, has written about this several times. A very useful post from 2012 sums up the issue. I recommend you read that properly – and the follow-up in 2013, which showed the issue continued to be of concern – but I’ve summarised my own opinion below.

[Figure: phonics plot, 2013 – D. Bishop, used with permission.]

More kids were being given a score of 32 – a bare pass – than should have been. We can speculate on the reasons for this, but a few leading candidates are fairly obvious:

  • teachers don’t want pupils who they ‘know’ are generally good with phonics to fail by one mark on a bad day.
  • teachers ‘pre-test’ students and give extra support to those pupils who are just below the threshold – like C/D revision clubs at GCSE.
  • teachers know that the class results may have an impact on them or the school.

This last one is the issue I want to focus on. If the class or school results are used in any kind of judgement or comparison, inside or outside the school, then it is only sensible to take human nature into account. And the pass rate is important. It might be a factor when internal roles are decided. It might be relevant to performance management discussions and/or pay progression. (All 1% of it.)

“The teaching of phonics (letters and the sounds they make) has improved since the last inspection and, as a result, pupils’ achievement in the end of Year 1 phonics screening check has gradually risen.”

From an Ofsted report

Would the inspector in that case have been confident that the teaching of phonics had improved if the scores had not risen?

Assessment vs Accountability

The conclusion here is obvious, I think. Most of the assessment we do in school is intended to be used in one of two ways: formatively or summatively. We want to know what kids know so we can provide the right support for them to take the next step. And we want to know where each kid is, compared to some external standard or to their peers.

Both of those have their place, of course. Effectively, we can think of these as tools for diagnosis. In some cases, literally that: I had a student whose written work varied greatly depending on where he sat. His writing was good, but words were spelt phonetically (or fonetically) if he sat anywhere other than the first two rows. It turned out he needed glasses for short-sightedness. The phonics screen is, or was, intended to flag up those students who might need extra support; further testing would then, I assume, identify the reason for their difficulty and suggest routes for improvement.

If the scores are also being used as an accountability measure, then there is pressure on teachers to minimise failure among their students. (This is not just seen in teaching; an example I’m familiar with is ambulance response times, which I first read about in Dilnot and Blastland’s The Tiger That Isn’t – and the issues have continued, e.g. this from the Independent.) Ideally, this would mean ensuring a high level of teaching and so high scores. But if a child has an unrecognised problem, it might not matter how well we teach them; they’re still going to struggle. It is only by the results telling us that – and in some cases, telling parents reluctant to believe it – that we can help them find the individual tactics that work.

And so teachers, reacting in a human way, sabotage the diagnosis of their students so as not to risk problems with accountability. Every time a HoD put on revision classes, every time students were entered for resits because they were below a boundary, every time an ISA graph was handed back to a student with a post-it suggesting a ‘change’, every time a PSA mysteriously changed from an okay 4 to a full-marks 6, we did this. We may also have wanted the best for ‘our’ kids, even if they didn’t believe it! But think back to when the league tables changed so that BTecs weren’t accepted any more. Did the kids keep doing them, or did it all change overnight?

And was that change for the kids?

Any testing which is high-stakes invites participants to try to influence results. It’s worth remembering that GCSE results are not just high-stakes for the students; they make a big difference to us as teachers, too! We are not neutral in this. We sometimes need to remember that.


With thanks to @oldandrewuk, @deevybee and @tom_hartley for the Twitter discussion which informed and inspired this post. All arguments are mine, not theirs.


Required Practicals

Morning all. I was at the Northern #ASEConf at the weekend, had a good time and came away with lots to think about. I’m going to try really hard to blog it this week, but I’m buried under a ton of stuff and pretty much every person in my immediate family is either ill, recovering or about to go into hospital. And Trump apparently won, which makes me think it’s time to dig a fallout shelter and start teaching my kids how to trap rabbits for food.

Anyway.

One of the recurring discussions between science teachers is about the new required practicals for the GCSE specs. I’m trying to put some resources together for the physics ones as part of my day job, on TalkPhysics (free to join, please get involved) and thought I’d share a few ideas here too.

Who Cares?

The exam boards don’t need lab books. There is no requirement for moderation or scrutiny. There is no set or preferred format. And, realistically, until we’ve seen something better than the specimen papers there’s no point trying to second-guess what the students will be expected to do in the summer of 2018.

So apart from doing the practicals, as part of our normal teaching, in the normal way, why should we do anything different? Why should we worry the kids about them? Why should we worry about them? There’s time for that in the lead-up to the exams, in a year’s time, when we’d revise major points anyway. For now, let’s just focus on good, useful practical work. I’ve blogged about this before, and most of it comes down to more thinking, less doing.

Magic Words

What we can do is make sure kids are familiar with the language – but this shouldn’t be just about the required practicals. So I put together some generic questions, vaguely inspired by old ISAs (and checking my recall with the AQA Science Vocab reference), ready to print. My thinking is that each laminated card is handed to a different group while they work. They talk about it while doing the practical, write their answers on it, and then the cards get added to a wall in the lab. This offers a quick review and a chance for teachers to see how kids are getting on with the vocab. The important thing – in my view, at least – is that it has to be for every practical. This is about improving fluency through frequent testing. And it ticks the literacy box too.

EDITED: more cards added, thanks to suggestion from @tonicha128 on Twitter.

So here you go: prac-q-cards-v2 as PDF.

Please let me know what you think, whether I’ve made any mistakes, and how it works if you want to try it out. It would be easy to produce a mini-test with a selection of these questions, or better ones, for kids to do after each practical. Let’s get them to the stage of being so good with these words that they’re bored by being asked the questions.

Square Pegs and Round Holes 1/2

My son is a keen and able reader. Not quite ten, he read and enjoyed The Hobbit earlier this year. He likes both Harry Potter and Alex Rider. David Walliams’ books are now ‘too young for him’ and he’s a big fan of variations on classic myths and fairy tales – The Sisters Grimm and Percy Jackson, for example. He was a ‘free reader’ for most of last year and continues to make progress when tested in school, in both reading and writing.

He’s now back on the reading scheme – Oxford level 17. According to the official website of the series, these books are at a lower level than his reading age of 11 years, 9 months, as assessed by the school last year. They’re short, mainly dull, and despite his teacher’s claim that he needs to be reading a wider variety, the school stock is almost all adapted classics. Jane Eyre and Silas Marner for a ten-year-old boy? Really?

We’ve got a good range at home, and he’s reading these in between finishing off the official school books (which he manages in less than an hour, but can’t change more than a couple of times a week). It’s not stopping him from reading. But I hate that for the first time in ages, my son sees reading as a chore.

You can probably tell I’m a little annoyed about all this.

Reasons and Excuses

I’m pretty sure that there are two reasons his school are being so inflexible. Firstly it’s a new scheme, a new teacher and they’ve got a lot on at this time of year. Only two kids – the other a year older – are on this level in the school. The scheme and approach probably work fine with everyone else, and adapting it to one student is a big time commitment. I understand that. I really do.

The other is about assessment. We’d assumed that the only way he could be assessed (via the Suffolk reading scale, apparently) was by reading the books that match it. We’re now not sure that’s right. The school have chosen an assessment strategy which doesn’t cater for the highest ability. It will be interesting to see how they try to show progress, seeing as these books are too easy for him.

I think they didn’t believe at first how quickly he was reading them. When he demonstrated that he had understood, retained and could explain the books verbally, they tried to slow him down. “Write a review.” “Discuss it with your parents so they can write in your record.” And, I kid you not – “Write a list of all the unstressed vowels.”

Maybe this week he’ll be told to read them while standing on his head. But that won’t address the problem – in fact, two problems – with this specific range.

Boredom and Spoilers

I should probably read a wider range of books myself. I’ll hold my hand up to limiting myself to SF and fantasy too much. But he does read a range, given the choice – and this selection doesn’t give him an option. Adapted classics, followed by… well, more adapted classics. He liked Frankenstein. Jekyll and Hyde scared him. Jane Eyre and Wuthering Heights bored him. Silas Marner was an ordeal. This is not varied. If the school can’t afford to buy more (which, for such a small number of kids, I can understand) then why can’t he read his own as well? We’d happily accept a list of recommendations from the teacher. What about Harry Potter, Malorie Blackman, Young James Bond or Sherlock Holmes, Philip Pullman, or Michelle Paver (he liked her books – thanks to @alomshaha for the suggestion)? If they have to be classics: Narnia, John Masefield, E. Nesbit…

The other issue is that if he’s read – or been made to read – versions of great books like Frankenstein or the Three Musketeers now, what are the chances he’ll enjoy the full editions in a couple of years? Why spoil his future enjoyment this way? I doubt his GCSE English teacher will let him read Percy Jackson when the rest of the class are reading Jekyll and Hyde for the first time, just because he knows the ending. A crap film can spoil a good book (Ender’s Game and Starship Troopers, step forward) and I can’t see why this would be different. I’m sure the publishers have lots of reasons for getting ‘classics’ on to the list, but haven’t teachers pointed out that kids will grow up to have a lifetime of enjoying good books?

Ranting and Reflection

Having to assess all kids against one set of standards inevitably means that some find it too hard and some too easy. When I stopped thinking like a parent, and started thinking like a teacher, this made a lot more sense. I’m sure I’ve done the same thing to a student at some point; my reflections on that will be in a separate post, hopefully in a few days. For now I needed to rant – and if you’re still reading, you can see that I acknowledge it!

I’d really welcome any responses on this one – especially from any primary colleagues!

Moving Beyond Predict/Observe/Explain

I don’t remember when I first used the idea of breaking down a demonstration for students by having them follow the POE format:

  • Predict what will happen
  • Observe what actually happens
  • Explain it in context

I think a lot of science teachers used this before – or even without – referencing the ideas of Michael Bowen, who explains the approach in this video. He wasn’t the first, but I tracked down the link via the site of the National Science Teachers Association in the US. There are several papers available there, for example this from a decade ago about hypothesis-based learning, which makes explicit the difference between a hypothesis and a prediction. It’s easy to see how these steps link nicely with a 5/7Es planning method. But I think it’s worth adding some steps, and it’s interesting to see how it might have developed over time. How students cope with these stages is an easy way to approach formative assessment of their skills in thinking about practicals, rather than simply doing them.

Please note – I’m sure that I’m missing important references, names and details, but without academic access I simply can’t track original papers or authors. My apologies and please let me know what I’m missing in this summarised family tree!

PEOE: I think this because

To stop students making wild speculations, we need to involve them in a conversation justifying their predictions. I suppose this is a first step in teaching them about research: referencing their thoughts. I find this needs guidance, as many students mix up the two uses of ‘explain’: the derivation of their prediction, and the link to accepted theory.

PODME: Recording what we observe

I got this, I think, from Katy Bloom (at York SLC, aka @bloom_growhow) after chatting at a TweetUp. I’m paraphrasing her point: in science it’s not enough simply to observe, we must also share that observation. This can take two forms: Describing in words and Measuring in numbers. The explanation then becomes about the pattern rather than a single fact or observation. Bonus points to students who correctly suggest the words qualitative and quantitative for the observations here!

PBODME: My current approach

I’ve tweaked this slightly by making the first explanation phase explicit. The display is on the wall and students can apply this (with varying degrees of success) from year 7 practicals with burning candles to year 13 physics investigations into how gamma intensity is affected by the thickness of lead shielding.

  • Prediction of outcome
  • Because of hypothesis based on life experience, context or research
  • Observation using senses, measuring devices
  • Description in words of what typically happens (sometimes as commentary during practical)
  • Measurement using appropriate units, with derived results and means where needed
  • Explanation of results, patterns, anomalies and confidence

Is it getting ungainly? Having this structure means students can see the next step in what they are doing, and are hopefully able to ask themselves questions about how to develop a practical further. I suppose you could argue that the original POE approach is the foundation, and these stages allow us to extend students (or ideally allows them to extend themselves).

PBODMEC: Why does it matter?

In many ways, the natural next step would be about Context – why should we care about the results and what difference do they make to what we know, what we can do or what we can make?

I plan to follow up this post with the printable resources (wall display and a student capability checklist) but they’ll have to wait until I’m home. In the meantime, I’d welcome any thoughts or comments – especially any with links to other formats and their uses in the school science lab.

Power Stations

“Okay, class… everybody… I’m not going to teach you about power stations. You need to know all the features but you’re going to be teaching each other. In groups of three you’re going to be putting together a presentation on one of the energy resources…”

Hands up if this sounds familiar? I’ve used variations on this theme for years, partly because I’m lazy but mainly because it works. I’ve fine-tuned it, of course; I now start off with two example presentations, one reasonable and one awful, and have the students tell me what they need to avoid.

If you can’t be a good example then you’ll just have to be a horrible warning.

Catherine Aird

But it doesn’t always work very well, even if you give them a blank energy resources table to complete as they listen. This year I’ve ended up trying out some different approaches and thought it might be worth sharing them.

Small changes

For chatty groups, how about having the presentations put together in the same way, but then presented as part of a circus or marketplace activity? Students only need to speak to a handful of classmates at a time, and they get to rehearse it too. They can complete the same blank template as they work, and ask questions they might not ask in a larger group. The downside is that you can’t listen in to correct misconceptions; I had students email their presentations first, then gave feedback before they shared with each other. Afterwards, of course, the powerpoints can be added to a shared drive through school. If you’ve the resources, kids could be videoed presenting, for long-term storage.

Roleplay

In small groups, students could identify viewpoints for and against different power stations. This risks being more about emotion than explanations, but it doesn’t have to take a long time in the classroom. Choose good roles, and after each discussion they can add + and – points to a whiteboard; this can be photographed for later recall. Offer bonus points for students able to identify bigger patterns such as ‘fossil fuels all contribute to climate change’ or ‘renewable resources are often unreliable’.

Top Trumps

Some groups love the idea of choosing four or five categories then scoring each power station from 10 (fantastic) to 1 (awful). Some kids struggle with the arbitrary nature of the scores, while others get bogged down in irrelevant squabbles. I found that using the category definitions as a starter got them more or less focussed. Dissuading them from spending the majority of the time drawing pictures was an issue! This led me to a slightly different approach, which I tweeted.

Effectively I gave the students a power station scorecard listing the main ways in which two power stations could be compared. In pairs they had to choose one each, then discuss which ‘won’ each round. Finally they had to choose an overall winner. To make life more complicated, simply give the class a new location every five minutes. More able students will recognise that these factors do not have equal weighting – you could discuss with them that a long-term view might award double points for ‘winning’ some of the rounds. There’s a quick sketch of that scoring idea below.

[Image: power station ‘deathmatch’ scorecard.]
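
To make the weighting discussion concrete, here’s a minimal sketch. The stations, categories, weights and scores out of 10 are all invented for the example; the point is just that counting the long-term rounds double can change the overall winner.

    # Toy example: every category, weight and score here is invented.
    CATEGORIES = {
        "fuel cost": 1,
        "reliability": 1,
        "build time": 1,
        "carbon emissions": 2,   # long-term rounds that could count double
        "decommissioning": 2,
    }

    SCORES = {  # out of 10 (fantastic) down to 1 (awful)
        "gas":  {"fuel cost": 6, "reliability": 9, "build time": 8,
                 "carbon emissions": 2, "decommissioning": 7},
        "wind": {"fuel cost": 9, "reliability": 2, "build time": 6,
                 "carbon emissions": 9, "decommissioning": 5},
    }

    def total(station, weighted=True):
        return sum(SCORES[station][cat] * (weight if weighted else 1)
                   for cat, weight in CATEGORIES.items())

    for station in SCORES:
        print(station, "unweighted:", total(station, weighted=False),
              "weighted:", total(station))

With these made-up numbers, gas edges it on a straight count (32 to 31) but wind wins once the long-term rounds count double (45 to 41) – exactly the discussion you want the pairs to have.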

Review

The card ideas above are both good for reviewing content – you could also allow more time but provide resources like textbooks or laptops (or BYOD). To quickly review the content, it’s easy to produce a simple card sort which students can arrange into renewable/non-renewable, thermal/kinetic, carbon contributors/carbon-neutral and so on.

Hope some of these ideas are useful – please let me know if so!

Science in the Media

This week’s Inside Health had not one but two great items for science lessons. I just wanted to put together a quick post so this will be mainly links and ideas rather than detailed resources.

Sources

You can follow the title above for the programme page, complete with transcript and their own links. My focus is on the two very different approaches to sharing ‘discoveries’ demonstrated by the programme.

The recent decision by NICE to use Tamoxifen ‘off-label’ for the prevention of breast cancer in high-risk groups has had a lot of media attention. @drmarkporter and his studio guests nicely referenced the negatives as well as the positives, mentioning side-effects and comparing the benefits to pre-emptive surgery (as chosen by Angelina Jolie).

As a contrast, the press release a little while back about the use of antibiotics to treat lower back pain seems to have been wildly optimistic. As I tweeted during the programme, the authors had an undeclared financial interest and the trial was very small; it also seems that the media were encouraged to hype the results far beyond the very small group of back-pain sufferers who would actually be eligible. I strongly recommend listening to the programme, which can also be downloaded from the Inside Health podcast page.

Teaching

Lots of useful questions and lots of likely arguments! My personal choice would be to have a class (probably an able GCSE group or perhaps A-level?) split into pairs or threes to research different aspects of reading a paper. There’s a fantastic page at NHS Behind the Headlines, where you can also see their own take on both of these stories (antibiotics for back pain, preventing breast cancer).

The ideas for the students to consider will revolve around three main concepts: benefit, risk and (financial) cost. These can be approached in several ways:

  • Claimed vs actual benefit
  • Conflict of interest
  • Placebo effects
  • Other choices (eg lifestyle changes) offering equivalent benefits
  • Side effects
  • Definitions of high-risk groups
  • Who pays for treatment
  • Number needed to treat (NNT) – see the worked example after this list
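
NNT is the one item on that list with a simple formula behind it, so a worked example may help: NNT = 1 / absolute risk reduction. The event rates below are invented for illustration, not taken from either of the trials discussed.

    # Illustrative only: the event rates are made up, not real trial data.
    def nnt(control_event_rate, treated_event_rate):
        """Number needed to treat = 1 / absolute risk reduction."""
        arr = control_event_rate - treated_event_rate
        return 1 / arr

    # suppose 8% of an untreated high-risk group develop the disease,
    # against 5% of a treated group: ARR = 0.03
    print(round(nnt(0.08, 0.05)))  # ~33 treated to prevent one case

Dozens of people treated – and exposed to the side-effects – for each one who benefits: that’s exactly the benefit/risk/cost trade-off the students should be weighing up.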

It might also be useful to provide students with printed copies of news stories, as well as a good summary of each piece of research, to see how well the downsides as well as advantages are covered. Cross-curricular links with literacy and media studies, anyone?

As I’m not teaching students who would benefit from these kinds of discussions, I can’t speak from experience – but I hope my ideas will prove useful to colleagues. Please let me know if so!

6 Mark Questions

This is one approach to teaching the dreaded 6 mark AQA questions. I’d be interested in comments or suggestions, as ever. The powerpoint that goes along with it was set up for B1, but is obviously easily changed. 6 Mark Questions as ppt.

Objectives

  • Recap key facts
  • Improve structure of answers to 6 mark questions
  • (Appreciate that it’s hard to write good 6 mark questions and markschemes)

Starter

Question on board, set timer running: “You have 6 minutes.”

I do it, We do it together

Ask what they think the aim of the lesson is.

6 mark questions may require explanations, examples to illustrate a specified concept, judgements of advantages and disadvantages, a description of a process, or an experimental method. Marks are awarded for scientific content and the quality of the writing. This means the key ideas must be clear and the explanation must make sense, with the points in a logical order. Most students lose marks because their answers lack sufficient detail (e.g. scientific vocabulary) or because their answer is rambling or confused. Markschemes will usually describe graded answers (low = 1–2 marks, then 3–4, then 5–6); examiners decide which description fits best, then award the higher or lower score in that band depending on the quality of writing. Aim for between 4 and 6 scientific points or steps in a process; if opposing viewpoints are needed, include points for and against, or examples of plants and animals etc.

Introduce method:

  • Bullet point ideas
  • Number the points to give a logical sequence, adding or removing points.
  • Use this order to write coherent sentences.

Model the method with a new question: ask students to consider how they would structure their answer, show your numbered points, and ask them to discuss possible sentences based on these. Have them compare answers with each other, and pick up on the details the examiner needs.

You do it together

Give them more questions, have them discuss one in pairs while they attempt it. Collaboration should be about making suggestions and producing two different answers which can be compared, not one identical answer. You could give a choice or set it by rows. Go through example bullet points, discuss gaps, additions and exclusions. Elicit possible/useful connectives.

You do it alone

Attempt a question in exam conditions, following the method. Compare to the markscheme (ideally this one should be a past or sample question with specified allowed answers) and make specific improvements. Return to the original Starter question and annotate their answer, explaining why they would change various parts.

Extension

  • Have students write their own questions and markschemes for specific points in the syllabus. Linking this to higher order tasks via Blooms or SOLO may be useful.
  • Use the questions to play consequences, where one student writes a question, one writes bullet points, one sequences them and the last writes full sentences. A group of four will end up with four complete answers which can then be discussed.
  • Give sample answers and have students mark them, first with and then without a markscheme. What do they forget? What level of detail is required?

Thoughts?

UPDATE: A useful approach from @gregtheseal via twitpic, and I like the ‘CUSTARD’ mnemonic shared by @IanMcDaid. Thank you!