Measurable Outcomes

Following a conversation on twitter about the phonics screening test administered in primary school, I have a few thoughts about how it’s relevant to secondary science. First, a little context – especially for colleagues who have only the vaguest idea of what I’m talking about. I should point out that all I know about synthetic phonics comes from glancing at materials online and helping my own kids with reading.

Synthetic Phonics and the Screening Check

This is an approach to teaching reading which relies on breaking words down into parts. These parts and how they are pronounced follow rules; admittedly, English is probably less regular than many other languages! But the rules are useful enough to be a good stepping stone. So far, so good – that’s true of so many models I’m familiar with from the secondary science classroom.

The phonics screen is intended, on the face of it, to check if individual students are able to correctly follow these rules with a sequence of words. To ensure they are relying on the process, not their recall of familiar words, nonsense words are included. There are arguments that some students may try to ‘correct’ those to approximate something they recognise – the same way as I automatically read ‘int eh’ as ‘in the’ because I know it’s one of my characteristic typing mistakes. I’m staying away from those discussions – out of my area of competence! I’m more interested in the results.

Unusual Results

We’d expect most attributes to follow a predictable pattern over a population. Think about height in humans, or hair colour. There are many possibilities but some are more common than others. If the distribution isn’t smooth – and I’m sure there are many more scientific ways to describe it, but I’m using student language because of familiarity – then any thresholds are interesting by definition. They tell us that something interesting is happening here.

The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” but “That’s funny …”

Possibly Isaac Asimov. Or possibly not.

It turns out that with the phonics screen, there is indeed a threshold. And that threshold just so happens to be at the nominal ‘pass mark’. Funny coincidence, huh?
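To see why a spike at a threshold looks so suspicious, here’s a toy simulation – entirely invented numbers, not the real screening data – of what happens to an otherwise smooth distribution of scores if some pupils sitting just below the 32-mark threshold get nudged up to it:

```python
# Toy simulation: a smooth distribution of check scores develops a spike
# at the pass mark if borderline scores are nudged upwards.
# All numbers here (ability distribution, nudge rate) are invented.
import numpy as np

rng = np.random.default_rng(0)
PASS_MARK = 32        # nominal pass mark on the 40-word check
N_PUPILS = 100_000

# 'True' scores: each pupil attempts 40 words with some underlying ability.
ability = rng.beta(6, 2, N_PUPILS)            # skewed towards success
true_scores = rng.binomial(40, ability)

# Reported scores: suppose pupils 1-2 marks short of the threshold are
# bumped up to it some of the time (hypothetical 40% of cases).
reported = true_scores.copy()
borderline = (reported >= PASS_MARK - 2) & (reported < PASS_MARK)
reported[borderline & (rng.random(N_PUPILS) < 0.4)] = PASS_MARK

# Compare the two distributions around the threshold.
for score in range(PASS_MARK - 4, PASS_MARK + 3):
    print(f"score {score:2d}: true {np.count_nonzero(true_scores == score):6d}"
          f"  reported {np.count_nonzero(reported == score):6d}")
```

The ‘true’ counts change smoothly from one score to the next; the ‘reported’ counts show a dip just below 32 and a bulge at exactly 32 – roughly the shape of the real plot below.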

The esteemed Dorothy Bishop, better known to me and many others as @deevybee, has written about this several times. A very useful post from 2012 sums up the issue. I recommend you read that properly – and the follow-up in 2013, which showed the issue continued to be of concern – but I’ve summarised my own opinion below.

[Figure: distribution of phonics check scores, 2013 – D Bishop, used with permission.]

More kids were being given a score of 32 – just passing – than should have been. We can speculate on the reasons for this, but a few leading candidates are fairly obvious:

  • teachers don’t want pupils who they ‘know’ are generally good with phonics to fail by one mark on a bad day.
  • teachers ‘pre-test’ students and give extra support to those pupils who are just below the threshold – like C/D revision clubs at GCSE.
  • teachers know that the class results may have an impact on them or the school.

This last one is the issue I want to focus on. If the class or school results are used in any kind of judgment or comparison, inside or outside the school, then it is only sensible to take human nature into account. And the pass rate is important. It might be a factor when internal roles are handed out. It might be relevant to performance management discussions and/or pay progression. (All 1% of it.)

“The teaching of phonics (letters and the sounds they make) has improved since the last inspection and, as a result, pupils’ achievement in the end of Year 1 phonics screening check has gradually risen.”

From an Ofsted report

Would the inspector in that case have been confident that the teaching of phonics had improved if the scores had not risen?

Assessment vs Accountability

The conclusion here is obvious, I think. Most of the assessment we do in school is intended to be used in one of two ways: formatively or summatively. We want to know what kids know so we can provide the right support for them to take the next step. And we want to know where each kid is, compared to some external standard or their peers.

Both of those have their place, of course. Effectively, we can think of these as tools for diagnosis. In some cases, literally that: I had a student whose written work varied greatly depending on where he sat. His writing was good, but words were spelt phonetically (or fonetically) if he was sat anywhere other than the first two rows. It turned out he needed glasses for short-sightedness. The phonics screen is, or was, intended to flag up those students who might need extra support; further testing would then, I assume, identify the reason for their difficulty and suggest routes for improvement.

If the scores are also being used as an accountability measure, then there is pressure on teachers to minimise failure among their students. (This is not unique to teaching; an example I’m familiar with is ambulance response times, which I first read about in Dilnot and Blastland’s The Tiger That Isn’t – and the issue has continued, eg this report from the Independent.) Ideally, this would mean ensuring a high level of teaching and so high scores. But if a child has an unrecognised problem, it might not matter how well we teach them; they’re still going to struggle. It is only by the results telling us that – and in some cases, telling parents reluctant to believe it – that we can help them find individual strategies that work.

And so teachers, reacting in a human way, sabotage the diagnosis of their students so as not to risk problems with accountability. Every time a HoD put on revision classes, every time students were put in for resits because they were below a boundary, every time an ISA graph was handed back to a student with a post-it suggesting a ‘change’, every time a PSA mysteriously changed from an okay 4 to a full-marks 6, we did this. We may also have wanted the best for ‘our’ kids, even if they didn’t believe it! But think back to when league tables changed so BTECs weren’t accepted any more. Did the kids keep doing them or did it all change overnight?

And was that change for the kids?

Any testing which is high-stakes invites participants to try to influence results. It’s worth remembering that GCSE results are not just high-stakes for the students; they make a big difference to us as teachers, too! We are not neutral in this. We sometimes need to remember that.


With thanks to @oldandrewuk, @deevybee and @tom_hartley for the twitter discussion which informed and inspired this post. All arguments are mine, not theirs.

You’re Welcome, Cambridge Assessment

It’s not often I can claim to be ahead of the trend. Pretty much never, to be honest. But this time I think I’ve managed it, and so I’m going to make sure all my readers, at least, know about it.

Recently the TES “exclusively reported” – which means other sites paraphrased their story and mentioned their name, but didn’t link – that Cambridge Assessment was considering ‘crowd-sourcing’ exam questions. This would involve teachers sending in possible questions which would then be reviewed and potentially used in external exams. Surplus questions would make up a large ‘question bank’.

I suggested this. This is, in fact, pretty much entirely my idea. I blogged ‘A New Exam Board’ in early 2012 suggesting teachers contribute questions which could then provide a range of sample papers as well as external exams. So it is not, despite what Tim Oates claims, a “very new idea.” Despite the similarity to my original post I do, however, have some concerns.

Backwards Backwards Design

So instead of teachers basing their classroom activities on giving kids the skills and knowledge they need to attempt exam questions, we’re doing it the other way around? As I’ve written before, it’s not necessarily a bad thing to ‘teach to the test’ – if the test is a good one. Writing exam questions and playing examiner is a valuable exercise, both for teachers and students, but the questions that result aren’t always helpful in themselves. As my OT-trained partner would remind me: “It’s the process, not the product.”

Credit

Being an examiner is something that looks good on a CV. It shows you take qualifications seriously and have useful experience. How can teachers verify the work they put into this? How can employers distinguish between teachers who sent in one dodgy question and those who shared a complete list, meticulously checked and cross-referenced? What happens when two or more teachers send in functionally identical questions?

Payment

A related but not identical point. How is the time teachers spend on this going to be recognized financially? And should it be the teacher, or the school? Unless they are paid, teachers are effectively volunteering their time and professional expertise, while Cambridge Assessment will continue to pay their permanent and contract staff. (I wonder how they feel about their work being outsourced to volunteers…)

Quality

It’s hardly surprising at this early stage that the details aren’t clear. One thing I’m interested in is whether the submissions shared as part of the ‘question bank’ will go through the same quality control process as those used in the exams. If so, it will cost Cambridge Assessment time and therefore money. If not, it risks giving false impressions to students who use the bank. And there’s nothing in the articles so far to say whether the bank of questions will be free to access or part of a paid-for product.

Student Advantage

Unless there are far fewer ‘donated’ questions than I’d expect, I don’t think we will really see a huge advantage held by students whose teachers contributed a question. But students are remarkably sensitive to the claims made by teachers about “there’s always a question on x” or “it wasn’t on last year’s paper, so expect y topic to come up”. So it will be interesting to see how they respond to their teachers contributing to the exam they’ll be sitting.

You’re Welcome

I look forward to hearing from Cambridge Assessment, thanking me for the idea in the first place…

 

Unspecifications

I’m really starting to get annoyed with this, and I’m not even in the classroom full-time. I know that many colleagues – @A_Weatherall and @hrogerson on Staffrm for example – are also irritated. But I needed to vent anyway. It’ll make me feel better.

EDIT: after discussion on Twitter – with Chemistry teachers, FWIW – I’ve decided it might help to emphasise that my statements below are based on looking at the Physics specification. I’d be really interested in viewpoints from those who focus on teaching Biology and Chemistry, as well as those with opinions on whether I’ve accurately summed up the situation with Physics content or overreacted.

The current GCSE Science specifications are due to expire soon, to be replaced by a new version. To fit in with decisions by the Department for Education, there are certain changes to what we’ve been used to. Many others have debated these changes, and in my opinion they’re not necessarily negative when viewed objectively. Rather than get into that argument, I’ll just sum them up:

  1. Terminal exams at the end of year 11
  2. A different form of indirect practical skills assessment (note that ISAs and similar didn’t directly assess practical skills either)
  3. More content (100+ pages compared to the previous 70ish for AQA)
  4. Grades 9-1 rather than A*-G, with more discrimination planned for the top end (and, although not publicised, less discrimination between weaker students)

Now, as with many other subjects, the accreditation process seems to be taking longer than is reasonable. It also feels, from the classroom end, that there’s not a great deal of information about the process, including dates. The examples I’m going to use are for AQA, as that’s the specification I’m familiar with. At least partly that’s because I’m doing some freelance resource work and it’s matched to the AQA spec.

Many schools now teach GCSE Science over more than two years. More content is one of several reasons why that’s appealing; the lack of an external KS3 assessment removes the pressure for an artificial split in content. Even if the ‘official’ teaching of GCSE starts in Year 10, the content will obviously inform year 9 provision, especially with things like language used, maths familiarity and so on.

Many schools have been teaching students from the first draft specification since last September. The exam boards are now working on version three.

The lack of exemplar material, in particular questions, means it is very hard for schools to gauge likely tiers and content demand for ‘borderline’ students. Traditionally, this was the C-D threshold, and I’m one of many who recognised the pressure this placed on schools with league tables, with teachers being pushed much harder to help kids move from a D to a C grade than from C to B. The comparison is (deliberately) not direct: as I understand it, an ‘old’ middle grade C is now likely to be a level 4, below the ‘good pass’ of a level 5.

Most schools start to set for GCSE groups long before the end of Year 9. Uncertainties about the grade implications will only make this harder.

The increased content has three major consequences for schools. The first is the teaching time needed, as mentioned above. The second is CPD; non-specialists in particular are understandably nervous about teaching content at GCSE which until now was limited to A-level. This is my day job and it’s frustrating not to be able to give good guidance about exams, even if I’m confident about the pedagogy. (For Physics: latent heat, equation for energy stored in a stretched spring, electric fields, pressure relationships in gases, scale drawings for resultant forces, v² – u² = 2as, magnetic flux density.) The last is the need for extra equipment, especially for those schools which don’t teach A-level Physics, with the extra worry about required practicals.
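For colleagues who haven’t taught this content before, the standard textbook forms of a few of the equations mentioned are roughly as follows – my own summary, so check the accredited specification for the exact symbols and wording it expects:

```latex
% Standard forms of some of the 'new to GCSE' physics equations mentioned
% above (my summary, not quoted from the draft specification). Requires amsmath.
\begin{align*}
  E_e &= \tfrac{1}{2} k e^2   && \text{energy stored in a stretched spring} \\
  E &= m L                    && \text{thermal energy for a change of state (latent heat)} \\
  v^2 - u^2 &= 2 a s          && \text{uniform acceleration over a distance } s \\
  p V &= \text{constant}      && \text{fixed mass of gas at constant temperature}
\end{align*}
```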

Even if teachers won’t be delivering the new specification until September, they need to familiarize themselves with it now. Departments need to order equipment at a time of shrinking budgets.

I’m not going to suggest that a new textbook can solve everything, but they can be useful. Many schools have hung on in the last few years as they knew the change in specification was coming – and they’ve been buying A-level textbooks for that change! New textbooks can’t be written quickly. Proofreading, publishing, printing, delivery all take time. This is particularly challenging when new styles of question are involved, or a big change such as the new language for energy changes. Books are expensive and so schools want to be able to make a good choice. Matching textbooks to existing resources, online and paper-based, isn’t necessarily fast.

Schools need time to co-ordinate existing teaching resources, samples of new textbooks and online packages to ensure they meet student needs and cost limitations.

Finally, many teachers feel they are being kept in the dark. The first specification wasn’t accredited, so exam boards worked on a second. For AQA, this was submitted to Ofqual in December (I think) but not made available on the website. Earlier this month, Ofqual chose not to accredit this version either, but gave no public explanation why. Teachers are left to rely on individual advisers, hearsay and twitter gossip. An explanation would have given teachers an idea of what was safe to rely on and what was likely to change. It took several weeks for the new submission dates to appear on the website – now mid-March – and according to Ofqual it can take eight weeks from submission to accreditation.

If these time estimates are correct, the new AQA specification may not be accredited until mid-May, and as yet there is nothing on record about what was wrong with previous versions. Teachers feel they are being left in the dark yet will be blamed when they don’t have time to prepare for students in September.

I think that says it all.

Square Pegs and Round Holes 1/2

My son is a keen and able reader. Not quite ten, he read and enjoyed The Hobbit earlier this year. He likes both Harry Potter and Alex Rider. David Walliams’ books are now ‘too young for him’ and he’s a big fan of variations on classic myths and fairy tales – The Sisters Grimm and Percy Jackson, for example. He was a ‘free reader’ most of last year and continues to make progress when tested in school, in both reading and writing.

He’s now back on the reading scheme – level 17 Oxford. According to the official website of the series, these books are pitched below the reading age of 11 years, 9 months that the school assessed him at last year. They’re short, mainly dull, and despite his teacher’s claim that he needs to be reading a wider variety, the school stock is almost all adapted classics. Jane Eyre and Silas Marner for a ten-year-old boy? Really?

We’ve got a good range at home, and he’s reading these in between finishing off the official school books (which he manages in less than an hour, but can’t change more than a couple of times a week). It’s not stopping him from reading. But I hate that for the first time in ages, my son sees reading as a chore.

You can probably tell I’m a little annoyed about all this.

Reasons and Excuses

I’m pretty sure that there are two reasons his school are being so inflexible. Firstly it’s a new scheme, a new teacher and they’ve got a lot on at this time of year. Only two kids – the other a year older – are on this level in the school. The scheme and approach probably work fine with everyone else, and adapting it to one student is a big time commitment. I understand that. I really do.

The other is about assessment. We’d assumed that the only way he can be assessed (via the Suffolk reading scale, apparently) is by reading the books that match it. We’re now not sure that’s right. The school have chosen an assessment strategy which doesn’t cater for the highest ability. It will be interesting to see how they try to show progress, seeing as these are too easy for him.

I think they didn’t believe at first how quickly he was reading them. When he demonstrated that he had understood, retained and could explain the books verbally, they tried to slow him down. “Write a review.” “Discuss it with your parents so they can write in your record.” And, I kid you not – “Write a list of all the unstressed vowels.”

Maybe this week he’ll be told to read them while standing on his head. But that won’t address the problem – in fact, two problems – with this specific range.

Boredom and Spoilers

I should probably read a wider range of books myself. I’ll hold my hand up to limiting myself to SF and fantasy too often. But he does read a range, given the choice – and this selection doesn’t give him an option. Adapted classics, followed by… well, more adapted classics. He liked Frankenstein. Jekyll and Hyde scared him. Jane Eyre and Wuthering Heights bored him. Silas Marner was an ordeal. This is not varied. If the school can’t afford to buy more (which, for such a small number of kids, I can understand) then why can’t he read his own as well? We’d happily accept a list of recommendations from the teacher. What about Harry Potter, Malorie Blackman, Young James Bond or Sherlock Holmes, Philip Pullman, Michelle Paver (he liked her books, thanks to @alomshaha for the suggestion)? If they have to be classics: Narnia, John Masefield, E. Nesbit…

The other issue is that if he’s read – or been made to read – versions of great books like Frankenstein or the Three Musketeers now, what are the chances he’ll enjoy the full editions in a couple of years? Why spoil his future enjoyment this way? I doubt his GCSE English teacher will let him read Percy Jackson when the rest of the class are reading Jekyll and Hyde for the first time, just because he knows the ending. A crap film can spoil a good book (Ender’s Game and Starship Troopers, step forward) and I can’t see why this would be different. I’m sure the publishers have lots of reasons for getting ‘classics’ on to the list, but haven’t teachers pointed out that kids will grow up to have a lifetime of enjoying good books?

Ranting and Reflection

Having to assess all kids against one set of standards inevitably means that some find it too hard, some too easy. When I stopped thinking like a parent, and started thinking like a teacher, this made a lot more sense. I’m sure I’ve done the same thing to my own students at some point; my reflections on that will be in a separate post, hopefully in a few days. For now I needed to rant – and if you’re still reading, hopefully you’ll see that I acknowledge it!

I’d really welcome any responses on this one – especially from any primary colleagues!

Heat Misconceptions

Like many of us, I’m currently spending the majority of my time helping students prepare for external exams. Because of how science exams now work in secondary school, most of my classes are facing one or more exams in the next few weeks, just for physics. Seven classes are doing GCSE content (2 x Yr9, 3 x Yr10, 2 x Yr11) and two classes are in sixth form.

Something I’ve spent a little time on was prompted by the variety of answers to mock questions on heat transfer. It was clear that many able students were struggling to give clear explanations – and perhaps lacked understanding – of the mechanisms by which thermal energy is transferred, as demonstrated by Qs 4 and 5 on the AQA P1 June 2013 paper. So I looked into it.

Examiner’s Reports

My first step was to check whether this was an isolated case or something seen when these exam papers were originally sat. I strongly recommend that colleagues, if they’re not already familiar with them, track down the reports written after each exam for the benefit of teachers and exam boards. They’re available (delayed) for pupils too, but with AQA you need to go through the main subject page rather than the quick ‘Past Papers’ link.

…nearly half of students scored two marks or less. Common mistakes were referring to ‘heat particles’, thinking that the vacuum stopped all forms of heat transfer, thinking that the vacuum contained air and referring to the transfer of ‘cold’.

…Students who referred to water particles often mistakenly referred to them ‘vibrating more’ as a result of the energy given, or to the particles themselves becoming less dense.

From AQA P1 June 2012 Report

So it wasn’t just my kids.

Now What?

I think of myself as a fairly evidence-based practitioner, so next I wanted to check out some wider sources. A quick search for ‘physics misconceptions heat’ returns a large number of results, including one from more than 20 years ago which shows how well-established the problem is.

As a science teacher, I thought Physics Education from the IOP and School Science Review from the ASE seemed good places to look. Unfortunately both require membership, a problem in terms of cost which I’ve blogged about before. Students’ misconceptions about heat transfer mechanisms and elementary kinetic theory is relevant, as is this resource available without login on the ASE site. R Driver’s book Making Sense of Secondary Science was one of several recommended during the 2011 #asechat “What misconceptions do students have in science?”.

I used the students’ answers as a way to diagnose the ‘alternative conceptions’ that they had built up over time. For many, these had clearly been established long before my arrival, but I’m going to build some of the ideas into my next cycle of teaching for early intervention. Some of the points from Cyberphysics UK and PhysicsClassroom.com were also useful. What I produced – firstly as a scribbled list, then as a more formal activity – was the ‘Seven Sins of Heat Transfer’. In time I’d like to produce some confidence grids and link these to the diagnostic questions approach as explained at York Science. Concept cartoons with clear viewpoints let students explore different models without ‘owning up’ to ideas they think are wrong, which can be very helpful. And so here’s one of the great @DoTryThisAtHome cartoons:

 

Seven Sins of Heat Transfer

  • Heat rises
  • Particles of heat
  • Expanding particles
  • Shiny materials are good conductors
  • Cold gets in
  • Condensing and contracting are the same
  • Trapped particles can’t move through a vacuum flask

These are what I wrote while marking papers; I’ve just removed the profanity. My reading showed me that some were common alternative conceptions, while others demonstrated a poor understanding of technical terms, often made worse by persistent misuse in ‘everyday’ language. A bit of thinking, and more reading, helped me find ways to highlight these issues for students.

Printable version with prompt Qs: 7sins as .pdf

EDIT: I shouldn’t have needed prompting, but CathN suggested in the comments that model answers would be useful, particularly for non-specialists. And so I’ve put together a presentation going through each of the sections, explained more or less the way I would in class. Obviously colleagues will have their own thoughts and preferred analogies, but I’d love comments on possible improvements; simply click on the title slide below.

7sins

Alternatively: 7sins as .ppt

When time allows during revision, and certainly next time I teach this content, I’ll be linking these misconceptions explicitly with practical activities. I think I’ll also ban the use of ‘heat’ by itself. If students are forced to use ‘collisions between touching particles’, ‘energetic particles in a lower density region’ and ‘thermal radiation’ then we should be able to solve the sloppy language issue, at least.

Thoughts and comments on this very welcome; it strikes me that I could usefully spend time producing a series of lessons and resources on just this sort of thing. Exam question followed by diagnostic questions, circus of activities to highlight misconception, then applications of correct idea to new situation. So if anyone wants to pay me, well, you know where I am…

In the meantime:

I’m trying to track my impact (eg you using this resource or basing your own on my ideas). You don’t have to leave your name, just a few words about how what I did made a difference. If you’ve blogged about it, I’d love for you to include a link. Tweets are transient, comments on the posts are hard to collect together, but this would really help.

Blog Feedback via Google Form

 

Exam Paper Debriefs (Summer 2012)

I’m combining two resources into one post here, but hopefully they should still show up by searching. (He types, hurriedly adding some tags.) I’ve made two powerpoints, each matched to what I think are the easy marks available on the summer 2012 P1 and P2 exams from AQA. Useful as practice or as full mocks, I often have students go through them focusing on what they should all aim for, before checking through in more detail. Having students divide their missed marks (using this exam paper debrief pdf) into recall failures and method mistakes can be helpful.

If students are able, they could also be pointed towards the examiners’ reports, which are only available if you go through the subject link at AQA rather than the direct Past Papers route. If not, then this is our job anyway – perhaps something to consider as part of a backwards design approach?

P1 june2012 easy as ppt, for the P1 summer 2012 exam – see also my P1 summary activity.

P2 may2012 easy as ppt, for the P2 summer 2012 exam – see also my P2 summary activity.

And yes, before you ask – I am working on equivalent resources for more recent exams, hopefully to be done before we all need them for mocks. Although the summer 2013 papers haven’t shown up yet – is that because, without January 2014 papers to use, AQA are expecting those to be used as mocks too? Must check e-AQA… (adds to ever-growing to-do list)

Finally; yes, I’ve been fairly quiet and quite down as of late; lots going on, I’ll be fine, send chocolate and coffee if feeling helpful. As that’s pretty much all I’ve been eating for a while, supplies are running low!

 

 

Too Much Applause?

A very quick one, because I’ve got marking looming as usual. I read an interesting post on Lifehacker about seeking feedback rather than applause. It reflected something we discussed at a recent department meeting: to help all students progress, we need to be specific with praise as well as constructive with criticism. I think we all know about giving students specific and measurable targets when marking books – ‘Underline all titles’ rather than ‘Keep work neater’, for example.

But we need to do the same when we praise students too. We need to tell them why we thought that a piece of work was excellent, so they know to look back at it for guidance when they struggle with a related task or concept. Otherwise it’s just clapping. Applause is nice – but feedback is better.

My browser is refusing to let me add the link so I’ll just have to paste it: http://lifehacker.com/distinguish-between-feedback-and-applause-to-get-more-u-1500218034

 

Generating Electricity (the YorkScience way)

If you’re a science teacher and not yet aware of the approach being trialed and developed as part of YorkScience, you should probably go have a look. My understanding is that it revolves around two linked ideas:

  • If we start by considering how we will assess competence, we can make sure we use appropriate activities to develop that competence.
  • Students should have access to everything we can provide about how answers are judged, so they can focus on what separates good and better answers.

To a novice, outsider or government minister, this may look like ‘teaching to the test’. The problem is that using that phrase shows the speaker thinks the test is no good. There’s no reason why we shouldn’t teach students to perform well in a test, if the test shows us something useful about the students’ knowledge and skill.

Anyway, political point made, let’s get on with the show.

I’ve produced (or more accurately, mostly finished producing) a resource which teachers can use to assess and develop understanding of generating electricity. There are two pdfs. The first is intended to make a booklet (4 sides of A4) which students write on over the course of a lesson and homework. The lesson plan would be as follows:

Starter: “Wind or Nuclear Power?” on the board, with cartoons or images if preferred.

Main:

  1. Students attempt multiple choice question in exam conditions. Allow time to fill in the ‘Why’ section too – perhaps ten minutes overall?
  2. Model how to use the confidence grid with the first part of the exam question, ideally projecting the pdf and, by discussion, ticking appropriate boxes.
  3. Students work together in pairs or small groups to finish the grids.
  4. The second resource generating electricity diagnostic incomplete ms leads students through different (but overlapping) activities depending on which answers they chose. This is intended to mimic the teacher judgement which means you explain things in different ways depending on how and why a student made a mistake. This so far only has the first part (of four) completed.
  5. Discuss and compare the notes students have made to support them in each answer.
  6. Start (and probably set for homework) the notes summary on the last page of the booklet. This includes key words for prompts and gives some suggestions of format.

Plenary:

  • Which would you choose, Wind or Nuclear Power? Students must back up their opinion with a (revisable) fact.
  • What exam skills have we practised today?

I’m hoping to post the full version of the markscheme pages, as soon as they’re done. This may be completed as an extension activity by my triple group. 🙂 Comments and suggestions welcome, please let me know what you think.

Influential?

I’m still not really sure why I got invited. But I was. I’m currently on a train home after spending a couple of hours in a discussion at the Department for Education, after a message from the @educationgovUK account.

The aim of the session was to get some viewpoints from classroom teachers on the new/proposed National Curriculum. Apparently later sessions will hopefully include primary teachers, but this was secondary with a dash of special education. It wasn’t totally clear how the seven of us had been selected, although I presume there were others who declined for whatever reason. I want to make the point that I’m reporting general thoughts, from my POV, so please don’t assume I’m accurately quoting anyone else. Please let me know if and how I need to make corrections or clarifications.

EDIT: post by cleverfiend now up.

It felt like a positive session overall, although of course the real test will be if any of our suggestions are acted on. In no particular order:

We felt that the biggest issue facing schools and classroom teachers was a lack of time. This applies not only to the time needed to produce innovative and interesting activities on a day-to-day basis, but also to the time between the specification being finalised and starting to teach it. The meeting was jointly led by @trussliz and @jimm2011, who appreciated our insistence that schools need to pay close attention to what Ofsted and the exam boards say, more than to the criteria themselves.

The uncertainty – perhaps exacerbated by recent rapid changes to assessment rules – was linked by @hgaldinoshea and Janet (twitter link tk) to the number of schools opting for the iGCSE route. We were assured that the English and Maths specifications (for first teaching from 2015) would be published imminently. The others will follow, although no date was given. @mary_uyseg emphasised several times that for schools, assessment models would always be one of the first concerns, both to provide the best for their students and also because of results affecting the institution collectively and the staff individually.

The difficulty of getting information out to schools and teachers about national curriculum changes was discussed. The expectation is that all schools – whether formally linked or not – will ask their local Teaching School for advice with new curricula and specifications. Their support may involve a fee, but the DfE has provided funding for them to take on this role, which is less specific than the responsibility historically held by LAs. (Even I was not so insensitive as to suggest that maybe there are better ways to address communication weaknesses than by leaking new policy ideas to the Daily Mail or Times.) It was suggested that making sure exam boards and Ofsted pass on details, perhaps simultaneously tweeting links which could be RTed via subject associations, would be worthwhile. I made the point that interest and participation from Department staff in twitter chats would be an easy way to show engagement, and apparently this will be happening, starting with the next #sltchat.

(A personal aside; although it was suggested that Michael Gove take an overt interest in such things, I actually think it would be counterproductive. Not least because it would be harder for him to justify his errors (whether you consider them rare or frequent) to such a polarised audience. And the work of the misguided and cowardly @toryeducation tweeter doesn’t count as engagement.)

The balance between freedom to innovate and the time needed was raised. @oldandrewuk was not the only one to point out that although the old QCA schemes of work were perhaps unnecessarily detailed, at least there was much less ambiguity. @cleverfiend used the example of levels – a whole different argument – to point out that schools would end up adopting any offered alternative simply to save valuable time. (If I had thought of it, I would have contrasted the different markets for off-the-peg and bespoke tailoring. Schools tend to offer uniforms in standard sizes because they work well enough in most cases. The benefit of individually fitted versions of the clothes doesn’t justify the cost in terms of work needed.)

It was suggested that subject associations would be in good positions to develop and share possible teaching routes once the exam specifications were available, including exam formats and timing. It was agreed that better links with primary are needed, and Liz Truss acknowledged that the new details will place demands on staff, especially in areas like languages at primary. We suggested offering funding to subject groups like the ASE to improve their reach, at least during the transition.

Speaking of subject specialisms, it emerged that there are several expert discussion groups hosted at the Department, made up of teachers and other educators. They are not paid for their time, receiving only travel expenses while they address concerns like ITT provision during changing specifications. Readers may already be aware of the weakness of this model as demonstrated by the recent demise of the expert group looking at ICT/computing (link tk), an issue which was raised and received a very clear “No comment.” It wasn’t clear how these expert groups were set up and how they report outside the Department, let alone how they recruit.

I’m sure I’ve missed subtle points, and possibly major ones. Links to the national curriculum documents I reviewed ahead of time (found by me, nothing that’s not online) and twitter accounts will be sorted as soon as I’m at a desktop, not tapping away on my tablet on a crowded train. Hope this makes some kind of sense in the meantime.

Two postscripts:
1 I’ve honestly no idea how my name came up. All of the teacher/blogger attendees made clear we had never claimed to speak for anyone except ourselves. I hope that for future events @educationgovuk is able to sort out some kind of nominations system.
2 Yes I’ve met @oldandrewuk and he looks exactly like his twitter profile picture.

From the Classroom Up

So we had a Journal Club.

Getting on for 200 tweets from a small (but dedicated) group of Science teachers, with some tentative conclusions as Storified elsewhere. Although participants commented on the weak results from the case study – unavoidable with small groups on a single site – it certainly seemed interesting.

Could we show improved understanding, and hence achievement, by moving away from HSW skills integrated with content, and instead start KS3 by teaching these skills discretely? Enquiring minds want to know. If only there was a way to expand an interesting case study to get more reliable and/or generally applicable results. If only there was a general move towards gathering more evidence at a classroom level that could be widely shared in the profession…

“Hang on, fellas. I’ve got an idea.”


 Where We Are

An interesting case study has found a benefit from one approach (discrete teaching of Sc1 skills at the start of KS3) over another (gradually introduced over the year). A small sample was involved at one school.

What We Could Do Next

As several people pointed out, we need more data before proceeding to a full trial. The next step would be collecting information about schools which use these two approaches and how well they work. How do schools assess students’ understanding of the language and methods? A Google Form or similar would be an easy way to acquire the data without a high cost at this stage.

Trial Design

I should possibly leave this to the experts, but the whole point of this teacher-led approach is to get us involved. (Alternatively, the DfE could press-release a huge study but not tell us what they’re actually investigating.) As I understand it, we’d need to:

  1. Get an education researcher to co-ordinate design/timetables/data analysis.
  2. Produce standard resources to be used either all together (discrete unit) or spread through the year (integrated learning) – this could be based on CASE or similar approaches.
  3. Design outcome measure, ideally something cheap and non-intrusive.
  4. Recruit participant schools.
  5. Visit schools during trial (in both arms) to observe delivery, consider deviation from ‘ideal script’, and also raise profile of organisation/idea.
  6. This provides good ‘teacher/researcher’ links and could be used as a way to observe CSciTeach candidates perhaps, or at least accredit ‘teacher-researchers’.
  7. Collect data on outcomes for both groups. Tests need to be blinded, ideally marked externally or by computer. Workload!
  8. Data analysis – which approach gives the best results? Is this correlated with some characteristic of the schools? (A rough sketch of what this might look like follows the list.)
  9. Share results widely, provide materials and best practice guidance based on evidence.
  10. Plan the next RCT, perhaps looking at the materials used.
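To make step 8 a little more concrete, here’s a minimal sketch of the sort of analysis I have in mind – all the numbers, and the school characteristic used, are invented placeholders rather than a real analysis plan:

```python
# Minimal sketch of step 8: compare outcomes between the two trial arms,
# then look for a school-level correlate. All data below is fabricated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented per-pupil outcome scores for the two arms of the trial.
discrete_arm = rng.normal(62, 12, 400)     # Sc1 skills taught as a discrete unit
integrated_arm = rng.normal(58, 12, 400)   # skills integrated through the year

# Which approach gives the better results, on average?
t_stat, p_val = stats.ttest_ind(discrete_arm, integrated_arm)
print(f"mean difference: {discrete_arm.mean() - integrated_arm.mean():.1f} marks, p = {p_val:.3f}")

# Is any effect correlated with some characteristic of the schools?
# Here: per-school mean gain vs. a hypothetical proportion of pupils on free school meals.
school_fsm = rng.uniform(0.05, 0.45, 30)
school_gain = 4 - 6 * school_fsm + rng.normal(0, 1.5, 30)   # fabricated relationship
r, p_corr = stats.pearsonr(school_fsm, school_gain)
print(f"FSM proportion vs. gain: r = {r:.2f}, p = {p_corr:.3f}")
```

In practice the education researcher from step 1 would choose the model (most likely something multilevel, given pupils nested within schools), but even a sketch like this shows why the outcome measure in step 3 needs to be comparable across schools.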

Funding and Support

I’ve a few ideas, but they’re probably way off. I don’t know how much it would cost, either in terms of money or time. The EEF is focused on attainment of particular groups, so I don’t know how relevant it would be to their aims. (But their funding round closes in October.) The ASE, I suspect, would have the organisational skills but not the money. Might the Science Learning Centres have a part to play, if we consider this from the point of view of teachers developing themselves professionally while conducting research? It would also nicely complement some of the aims of YorkScience. And we shouldn’t forget the original author, Andrew Grime, although I don’t think he’s on Twitter. (We probably should have tried harder to get in touch with him before the Journal Club session, come to think of it…)

I’m sure there are many other questions that could be answered in UK Science classrooms. But the question should be, which one shall we try to answer first? Instead of complaining from the sidelines, teachers should, ideally through coordinated projects and their professional associations, get involved. This seems like an ideal chance to make the most of the Evidence-Based Teaching Bandwagon and could perhaps be launched/discussed at ResearchED2013. If we want to make something of it.

Do we?

 

An apologetic postscript: sorry to followers of the blog who got spammy emails about a post which wasn’t there. This was because I hadn’t read the fine print on Storify about not being able to embed the material on a WordPress.com blog.  It’s the same Storify I link to above, now happily live at the SciTeachJC site.