Doing an ISA – Pre-Practical

There will be a second post in a few days, if I can fit it in between coughing, marking and spending time with my family. Please excuse the brevity, but it seems highly unlikely that my broadband connection – thank you TalkTalk – will last long enough for my usual wittering.

This is intended for those of us who teach GCSE Science with AQA, to help with the joy of an ISA. Of course we’ve no idea what format this will take once Gove’s messed around with it, but I can be fairly confident that even he couldn’t make it any worse. I’ve blogged before about the weaknesses I see with the current model, and what I’ve done to address them. Here are the resources I’m currently using to try and help my classes. They should work, with tweaking of course, for any variant of the AQA Science courses. Click on the image for the presentation:

ISA preprac

I found that my students, despite having been shown the sample exam papers while they researched, struggled to include all relevant information on their Research Notes sheets. My solution was to produce an extra sheet with more detailed prompts, similar to those in the presentation above, which they could fill in. I had them keep the exam paper and markscheme open in an extra tab, and annotate their sheet with the linked question numbers for each fact. They then transferred their messy information to the official sheets, which of course acts as another rehearsal before the exam.

ISA preprac as .pdf

Please let me know what you think, good and bad. The ‘post-prac’ equivalents should be up by the end of half-term, subject to the usual caveats.

6 Mark Questions

This is one approach to teaching the dreaded 6 mark AQA questions. I’d be interested in comments or suggestions, as ever. The PowerPoint that goes along with it was set up for B1, but is obviously easily changed. 6 Mark Questions as ppt.

Objectives

  • Recap key facts
  • Improve structure of answers to 6 mark questions
  • (Appreciate that it’s hard to write good 6 mark questions and markschemes)

Starter

Question on board, set timer running: “You have 6 minutes.”

I do it, We do it together

Ask what they think the aim of the lesson is.

6 mark questions may require explanations, examples to illustrate a specified concept, judgements of advantages and disadvantages, or a description of a process or an experimental method. Marks are awarded for scientific content and the quality of the writing. This means key ideas must be clear and the explanation must make sense, with the points in a logical order. Most students lose marks because their answers lack sufficient detail (e.g. scientific vocabulary) or because their answer is rambling or confused. Markschemes will usually include graded answers (low = 1-2 marks, then 3-4, 5-6) and examiners will decide which description fits best, then award the higher or lower score depending on the quality of writing. Aim for between 4 and 6 scientific points or steps in a process; if opposing viewpoints are needed, include points for and against, or examples of plants and animals etc.

Introduce method:

  • Bullet point ideas
  • Number the points to give a logical sequence, adding or removing points.
  • Use this order to write coherent sentences.

Model with a new question: ask students to consider how they would structure their answer, show the numbering, then ask them to discuss possible sentences based on these points. Have them compare with each other, picking up on the details needed by the examiner.

You do it together

Give them more questions, have them discuss one in pairs while they attempt it. Collaboration should be about making suggestions and producing two different answers which can be compared, not one identical answer. You could give a choice or set it by rows. Go through example bullet points, discuss gaps, additions and exclusions. Elicit possible/useful connectives.

You do it alone

Attempt a question in exam conditions, following method. Compare to markscheme (ideally this one should be a past or sample question with specified allowed answers) and make specific improvements. Return to the original Starter question and annotate their answer, explaining why they would change various parts.

Extension

  • Have students write their own questions and markschemes for specific points in the syllabus. Linking this to higher order tasks via Blooms or SOLO may be useful.
  • Use the questions to play consequences where one student writes a question, one writes bullet points, one sequences and a last writes full sentences. This will end up with four complete answers which can then be discussed.
  • Give sample answers and have students mark them, first with and then without a markscheme. What do they forget? What level of detail is required?

Thoughts?

UPDATE: A useful approach from @gregtheseal via twitpic, and I like the ‘CUSTARD’ mnemonic shared by @IanMcDaid. Thank you!

Ofqual’s Absolute Error

In science lessons we teach students about the two main categories of error when taking readings. (And yes, I know that it’s a little more complicated than that.) We teach about random and systematic error.

Random errors are the ones due to inherently changing and unpredictable variables. They give readings which may be above or below the so-called ‘true value’. We can make allowances for them by repeating the reading, keeping all control variables the same, then finding a mean value. The larger the range, the bigger the potential random error – this is now described as the precision of the reading. I sometimes have my students plot this range as an error bar.
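
As a minimal sketch (my own illustration, with invented readings, not part of any AQA material), this is the approach to random error described above: repeat the reading, take the mean, and use the range as an indicator of precision.

```python
# Hypothetical repeat readings of the same quantity (invented values).
readings = [12.1, 11.8, 12.4, 12.0]

# The mean smooths out random error above and below the 'true value'.
mean_value = sum(readings) / len(readings)

# The range of the repeats indicates precision: a larger range means
# a bigger potential random error (this could be drawn as an error bar).
reading_range = max(readings) - min(readings)

print(f"mean = {mean_value:.2f}, range = {reading_range:.2f}")
```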

A systematic error is an artefact of the measuring system. It will be consistent in direction and size (perhaps in proportion to the reading, rather than absolute). A common type is a ‘zero error’, where the measuring device does not start at zero, so all readings are offset from the true value. We sometimes calibrate our readings to account for this.
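
Again as my own sketch with invented numbers: correcting a zero error is just subtracting the consistent offset from every reading.

```python
# Hypothetical balance that reads 0.3 g with nothing on the pan (a zero error).
zero_offset = 0.3

raw_readings = [5.8, 7.1, 6.4]  # invented raw readings in grams

# Calibrate by removing the consistent, same-direction offset from each reading.
calibrated = [round(r - zero_offset, 2) for r in raw_readings]

print(calibrated)
```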

You can consider spelling errors due to sloppy typing as being random, while persistently misspelling a particular word is systematic.

So what does this have to do with Ofqual?

The recent issues with the scoring of GCSE English coursework – discussed on twitter with the hashtag #gcsefiasco – are a good example of errors causing problems. But if we use the scientific approach to errors, it is much harder to blame teachers as Stacey has done.

Coursework is marked by teachers according to a markscheme, provided by the exam board. (It’s worth remembering that apart from multiple choice papers all external exams are marked in this way too.) An issue with controlled assessments is that teachers are unavoidably familiar with the marking guidelines, so can ensure students gain skills that should help them demonstrate their knowledge. This is after all the point of the classroom, to learn how it’s done. To complain that we ‘teach to the test’ is like criticising driving instructors for teaching teenagers how to drive on British roads.

Once the work of all students in a cohort has been marked, the department will spend some time on ‘internal moderation’. This means checking a random sample, making sure everyone has marked in the same way, and to the standard specified by the markscheme. Once the school has committed to the accuracy of the marks, they are sent to the exam board who will specify a new random sample to be remarked externally. If the new scores match those awarded by the school, within a narrow tolerance, then all the scores are accepted. If not, then all will be adjusted, up or down, to correct for a systematic error by the department. There will still be a few random errors – deviations from the ‘correct’ score on specific essays – but these will be fairly rare.

The exam board then converts the coursework score, using a top secret table, into a percentage of the available marks. You may not need to get everything perfect to get an ‘effective’ 100% on the coursework element of the course. And dropping 2 of 50 on the raw score, as marked by the teachers, may mean more than a 4% decrease after conversion. This table will be different for different papers because some exams are harder than others, but changes should be minimal if we want to be able to compare successive years.
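
To illustrate the conversion point (with entirely invented numbers, since the real tables are not published): a non-linear raw-to-percentage mapping can make 2 raw marks out of 50 cost more than the 4% a linear scale would suggest.

```python
# Invented raw-score-to-percentage table; the real ones are secret and
# vary between papers because some exams are harder than others.
conversion = {50: 100, 49: 96, 48: 91}

# Dropping 2 of 50 raw marks costs 9 percentage points in this made-up table,
# more than the 4% a straight-line conversion would give.
drop = conversion[50] - conversion[48]
print(drop)
```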

So what happened last summer?

Students who had gained the same raw score on the same coursework task, which had been marked to the same standard as confirmed by the exam boards during external moderation, were awarded different percentages by the exam boards depending on when the work was sent in. This was after sustained pressure from Ofqual, possibly because using the same boundaries in June as they had in January would have resulted in ‘too many’ higher grades. This was not about a small number of random errors in marking. This was not about a systematic error by some or all schools, because the boards had procedures to identify that. This was about a failure by the exam boards and Ofqual to discreetly fix the results the way they intended to.

It is a basic principle in science that you cannot adjust your results based on what you want or expect them to be. You might be surprised, you might recheck your working, but you can’t change the numbers because of wishful thinking. If there was an error, it was by the exam boards and Ofqual, who showed that they could not specify what work was equivalent to a C grade.

The procedures were followed in schools. The exam boards agreed that the controlled assessments were marked to their own standards. And yet Ofqual still claim that it is the fault of us teachers, who prepared our students so well for the controlled assessment that we are being called cheats.

I’ve blogged before about the weaknesses built in to the science ISAs. The exam board and Ofqual are either too busy to read what one teacher has to say – perfectly reasonable – or don’t have an answer. I don’t understand how it is our fault when their system approved what teachers did and how they marked.

So maybe we shouldn’t be marking controlled assessments at all.

PS (This is the cue for the unions to step in. And they won’t. This is why we need one national professional body representing teachers, using evidence rather than political rhetoric.)

Doing an ISA with AQA

I’ve managed not to blog about GCSE ‘reform’ – despite great temptation. If you’ve not seen them, then I suggest comparing three very different viewpoints (in style as well as opinion) from LKMCo, Tom Bennett and NAHT. When I have time I might update my previous post, from the last time Gove announced a major policy by leaking the details to the Daily Mail.

For now, a quick ‘ideas’ post about using ISAs for good science teaching, and hopefully enabling kids to achieve. This is partly in response to questions from @NQT_diary, as it’s spurred me to turn the draft into an actual readable item.

Teachers’ Notes

  • the ISA involves lots of paper – maybe your department will be organised, but double check
  • make sure you practise the actual experiment, if for no other reason than to generate the ‘sample data’ needed
  • remember that the markscheme is now ‘best fit’; compare with colleagues if needed to make sure you are consistent as a centre, as this is arguably the most important aspect come moderation day
  • you can share more than you think with the students

Objectives

Perhaps somewhat idealistically, I try to use ISA teaching as a way to bring together lots of ‘bits’ of investigative science. Ideally, of course, you will have used all of the skills and language in regular lessons; that after all is the point. Make sure that KS3 pupils are familiar with at least some of the terminology. The practicals are straightforward (sometimes insultingly so) which means students can focus on their explanations and analysis. Make sure you are using the updated language; I have sometimes had pupils create their own version of this using a range of examples.

My Structure

  1. Introduction
  2. Research 1
  3. Research 2
  4. Preparation for planning exam (Section 1)
  5. Section 1 exam inc table
  6. Practical 1
  7. Practical 2 inc graph/chart
  8. Preparation for analysis exam (Section 2)
  9. Section 2 exam

There are lots of issues with the ISA, as I blogged a little while back. It is possible to use it effectively, but in some ways I feel the exam works against good teaching; this wouldn’t be a problem if it didn’t take so long!

Students will need to complete the ‘research notes’ pro forma to take into their Section 1 exam; I had them do a ‘rough’ version first, which meant they had lots of material to annotate while revising/preparing. How much you can direct them to particular sites is frustratingly vague, but in my setting we provided a range of sources, some deliberately not well-suited, to make sure they had to think critically. Once the table is marked you can provide a replacement if that suits the practical better, without penalty. This means they aren’t penalised if a poor table would stop them collecting useful data. After the practical, the data and graph/chart must be collected in, then returned for the Section 2 exam along with a set of ‘sample data’ (which you produce), the ‘Case Studies’ (supplied by AQA) and their Research Notes. They need a big table.

While teaching I used GRR principles (skills development borrowed from literacy teaching; more info coming soon), which focus on productive collaborative work. This adds explicit stages to the teaching of skills (rather than content):

  1. I do, thinking out loud
  2. We do together
  3. You do collaboratively
  4. You do individually

The same structure can be used for the preparation lessons for both exams, and this brings us to the most surprising part of the ISA. We can share the specimen papers with students, and the exams are very defined in style so that in many cases they are effectively identical to the specimen. So they can attempt the specimen questions, go through the markscheme with teacher support, then sit what they know will be a very similar exam about their own research and experiment.

This still seems weird to me.

The preparation for the planning and analysis exams can be done in similar ways:

  • Talk through the specimen context and model a possible question for them, linking to key definitions (5min)
  • Have them predict and write down 2/3 questions that could be asked about experiment or data (5/10min)
  • In small groups, give them part of the specimen paper and have them discuss main points (10min)
  • Write their answers individually to improve accountability (10min)
  • Go through markscheme, comparing good/intermediate answers, having them mark/annotate their answers (15min). If time, they could compare answers from students who had time to discuss with those who answered ‘cold’

This gives them the practice they need, as well as building the skills. Of course ideally we would use all these bits individually in other lessons! I’d love to hear from anyone with thoughts or comments about what I’ve suggested.

Enemies of Promise

This will be a short post, partly because I’ve got lots of other things on the go and partly because I’m too angry about what appears to have happened. I say appears because I truly hope things aren’t as they seem, for the sake of our students.

In January – and at points since then – Michael Gove has labelled teachers and others who criticise his plans as ‘enemies of promise’. This has been used despite the criticisms often being valid and fair, based on data rather than ideology, and often from those who clearly know far more about educational theory and practice than him.

It appears, from lots of conversation on twitter and in the press, that this year’s GCSE results show some unexpected features. Overall, they seem to be a little lower than in previous years, and one exam in particular seems to have affected English results. Students who completed the foundation controlled assessment papers in January needed a lower score to achieve a C than those who sat the equivalent exam in June. (This issue is one we have seen many times with the AQA Science equivalent, ISAs.) The difference is significant and means that many students nationally have failed to reach a Grade C despite being on track for it up until this point.

There are two issues here, one of which is immediately significant. Students who have failed to achieve a grade C in English will find that their next steps – college or sixth form courses, apprenticeships and so on – are now barred to them. This matters now. Many of them will have been expecting to confirm their education and training places in the next week or so. There is little time to address this problem, if things are really as unfair as they seem.

And things are unfair. Most teachers, most people, accept that more challenging courses are worthwhile. Students may not be happy with the idea, but the difficulty of achieving particular grades is effectively an arbitrary choice. Changing it from year to year, or between exam boards, obviously makes comparisons and target setting much harder, but it is not unfair. Changing the grade boundaries, between the students sitting an exam and being given their grade – for students doing one particular course – is clearly very different. The press today have suggested it is like moving the goalposts not just during a football game, but after a penalty has been taken and before the ball crosses the line.

A cynic would suggest that the government sees moving goalposts after the numbers are known as a standard political tactic.

The other issue – and today of all days, this must be seen as secondary to the plight of affected students – is that schools are judged on their results. Gove and the Department for Education can take greater steps to control what happens in a school if GCSE results drop below certain levels. Significant indicators are the number of 5 A*-C grades, including English and Maths, and the EBacc. Both of these will drop in schools which have had students marked down from a C to a D grade in English due to these eleventh-hour changes.

I’m trying very hard not to be cynical. I don’t teach English, except in the sense that many teachers share favourite books, correct spelling or help with grammar. But like many others, I struggle to see the fairness in changing how students are graded, after they have studied and sat the exam. Their lower results will now make it easier for unpopular, non-evidence-based and rushed changes to be pushed through, including forced academisation. This means it is even more important to find out who ordered these grade boundary alterations.

Who are the enemies of promise now, Mr Gove?

Uncontrolled Assessments

Despite the title, a lot of this also relates to AS and A2 – Physics, at least. (They don’t trust me with squishy stuff or cooking.) It seemed a waste to spend time answering the call for evidence from Ofqual without blogging it too. Especially because it made me realise how messed up the whole ISA situation is. Maybe it’s much better in OCR and Edexcel; ‘my’ kids do AQA.

I’m not going through all my answers, just those that seemed particularly relevant, complimentary or critical. I’ll start off, because I’m a teacher, with the comment that when the question below is on an official Ofqual document, it seems odd that Gove wants students to lose marks for poor grammar:

“Please explain you answer.”

Weaknesses

There is no way to assess hands-on practical ability by the current method – the students are specifically told that their results are irrelevant. The students’ problem-solving abilities are tested by the practical itself, but this is not measured in any way.

In my opinion, there is very little difference, if any, to previous versions of controlled assessment – the results are no more ‘authentic’ than before. Teachers are still able to see the question papers in advance, and for AQA, section 1 is always effectively identical to the sample paper – which we can share, with the markscheme, with students. Unsurprisingly students end up rote-learning their answers. To be manageable in a real classroom all students end up doing a similar practical anyway, despite claims they would have more freedom. The ‘research’ section is useless, and students have frequently shown that the only helpful result when searching for a context is the teacher notes supplied by AQA. This means that the variety in support – which an AQA advisor agreed was ‘against the spirit but still allowed’ – shows that teacher efforts are still a big issue.

Workload/Management

The type of task has been badly managed by AQA, and is longwinded. Students must:

  1. research a hypothesis
  2. design an experiment and results table
  3. complete an exam paper on the design process
  4. complete the practical (modified if necessary)
  5. produce a graph or chart of results
  6. complete a second exam paper on their analysis.

Completing this process, at least for the first time, takes a minimum of 7 lessons in school (research ×2, paper 1, practical ×2, graph, paper 2) and probably more for each ISA. Students missing lessons due to illness or other commitments obviously makes the process more complex, a burden that inevitably falls on the teacher. In our timetable, students spend at least three weeks on one investigation, often only spending 1-2 hours on practical work. This is not sustainable.

It takes a while, and there is a huge number of separate pages the students have to be provided with, especially for the final paper (their research notes, their results and graph, a group set of results, two pages of case studies). Each section must be marked BY THE TEACHERS and kept securely, then moderated.

The specific tasks set show that AQA have not tested out the likely experiments, nor checked online for research sources that are reasonably pitched for GCSE students. The results are frequently nonsensical with the equipment available in a school lab, meaning there is no useful pattern for the students to discuss. The practical tasks are very straightforward, at about a Year 7 level.

The marking of the work is a long process and standardisation is difficult. I would conservatively estimate each ISA takes at least 20 minutes per student (paper 1, table, graph, paper 2). For a class of 30 this is ten hours of marking for each ISA – and they are expected to complete at least two per year. We are currently trying to complete them for Year 9 and 10 classes. This means up to thirty hours of marking per colleague.
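
The workload arithmetic above is simple to check:

```python
# The marking estimate from the paragraph above.
minutes_per_student = 20   # conservative: paper 1, table, graph, paper 2
students_per_class = 30

hours_per_class_per_isa = minutes_per_student * students_per_class / 60
print(hours_per_class_per_isa)  # ten hours of marking per class, per ISA
```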

Conclusion

The tasks are badly designed and not fit for purpose – the skills could be assessed more effectively and more uniformly. It feels like the tasks have been tweaked and changed so many times that they are effectively designed to fail at their claimed purpose.

The workload is unmanageable in my subject. Exam boards would, I suspect, have spent a lot more time thinking about this if they were responsible for paying those marking it. I don’t know how the cost of sitting the module has changed compared to externally marked ones, but of course schools don’t have to pay overtime either. This means less time to prepare lessons and teach the rest of our classes. Several exams per ISA also make SEN entitlement very difficult, with extended time and TA support being hard to schedule when a whole year group is doing assessments in a range of subjects, taking up more and more time.

I know colleagues who have missed work due to finishing marking, or because of being ill after ridiculously long hours to meet deadlines.

It could be done so much better. Students gain very little in the way of real practical skills, and the majority of the questions in the exam are about things that could be in regular exams unrelated to the ‘experiments’ they have done.

Students are certainly stretched, in that some questions are so obscure – or the markscheme so erratic – that only a few achieve the highest marks. In some cases the only way a student could write an answer that gains all the available marks is if they had been coached by their teacher. They struggle to apply what they have learned outside of the ISA environment, because it is so unlike the practicals and investigations they do at other times.

Scientific content is at a low level in most cases, but the maths demand can be quite high. This penalises students who are capable of explaining trends and patterns but struggle with numbers.

Constructive Feedback

Of course, as a teacher I feel the need to suggest a way to improve. I think it’s worthwhile that the ISA starts with a research task, but it needs to be standardised. How about a different structure:

  1. Produce a booklet of sources, some more useful than others, that students use to plan a practical in exam conditions.
  2. All students nationally do the same thing, from the same materials.
  3. A standard practical with a paper, using a standardised experiment, like the old A-level set-up.

These exams could be sealed beforehand then externally marked, anonymously, removing at a stroke many of the major issues with internally-marked work. Several samples would need to be produced, but this would avoid the demand for more and more assessed pieces – time-consuming for students and staff – in the hope of incremental improvements. The two main criticisms of the current system – that it fails to assess the skills fairly for all candidates, and that it takes too much time – would both be addressed.

Two other posts worth reading on this are by @hrogerson on OCR-B and by @DrRachael, also on AQA.

Maybe you have other suggestions?

Compare and Contrast

I’d just like to contrast two parts of the speech that Gove made (taken from the online document, not sure how closely he stuck to the script) at the Spectator Conference. I promise, after this I’m sticking to pedagogy for a bit, I’ve had enough of politics.

Michael Gove faced criticism from an unexpected source yesterday: his own speech to the Spectator Conference. In this he condemned those who used ‘alternative’ qualifications to exaggerate claims of performance or improvement, before relying on them himself.

For a decade now we have steered hundreds of thousands of young people towards courses and qualifications which are called vocational even though employers don’t rate them and which have been judged to be equivalent in league tables to one – or sometimes more – GCSEs, even though no-one really imagines they were in any way equivalent.

Adults who wanted to keep their positions, and keep their schools’ league table positions, used these qualifications to inflate their schools’ performance in these tables.

Later on, Gove praised several academies as part of his lead up to launching a funding boost for groups that take on ‘sponsor’ status.

…another Harris school – South Norwood – where 29% of pupils reached that measure [5 GCSEs at A*-C] in its last year as an LEA school; 100% last year.

The figures for 2011 he didn’t mention tell a very interesting story.

  • 75% of students achieved 5 GCSEs at A*-C including English and Maths.
  • 2% of students achieved the English Baccalaureate.
  • 46% of students achieved 5 GCSEs if we exclude ‘equivalent’ qualifications.

So if you discount the qualifications which Gove stated are in no way equivalent, his example is rather less impressive than he would like.

I notice that at the quoted school, the ‘average student’ is entered for 6 GCSE subjects, but has a total of 13.9 entries – that’s a lot of equivalents. These figures are from the BBC website, checked where possible at the DfE listing for the school (which interestingly does not list percentage results without equivalents).

Please let’s be clear, this is not a condemnation, or even a criticism, of a specific school. Many schools have been encouraged – effectively forced – to change their entry patterns in order to boost league table scores, due to political pressure. This has also been seen with the EBacc so Gove can’t blame it on previous governments.

Gove’s Resit

I was already planning to type up a few more developments in the #govelevels saga. Reading that Gove is to blame pretty much everyone except politicians for the difficulties in the exam system just means I’m finding it a little hard to be balanced. I’ll do my best, because I partly wonder if his intent is to push teachers to react angrily rather than rationally to his proposals. The more we respond with rhetoric and ad hominem attacks – as tempting as it is – the harder it is to seem professional.

Basically, this is going to be a short post with links that I’ve already shared on twitter. I’d like to flag them up again for anyone who missed them the first time, and to take the chance to comment in a little more than 140 characters. If there’s time, I’ll also address some of the comments from my previous post, which got a lot more attention than I expected.

First of all, I’d like to direct you to @miss_mcinerney‘s blog, where she explains why Gove is wrong on the ‘bottom 25%‘. The calculation goes some way to address my concerns in terms of ‘borderline’ students. It looks as if the 25% figure was plucked out of the air, perhaps to appeal to the very Daily Mail readers the story was leaked to. Laura’s calculations suggest an absolute maximum of 10% of students would be best suited to not doing O-levels, unless Gove is planning to make them even more challenging than he has suggested. (Of course in Science we’ve already seen Ofqual decrease student grades, demoralising students and making targets fairly useless: information here and here.)

This smaller proportion will potentially stigmatise the students even more, as well as making the cost per student of implementing them – in terms of teacher time and money – even greater. Of course, maybe Gove just can’t tell the difference between 25% and 10%, in which case a resit is needed. (Oops – they’re not allowed any more!) I’ve already linked to her original post but if you haven’t yet read it, I’d like to recommend it once more.

If I’d read this post by @dukkhaboy about why O-levels aren’t the issue before I’d written up my piece, it might have saved me a lot of time. In particular it mentions something I passed over: each change in the specification means teachers can spend less time being innovative because they have to sort out the teaching scheme. Politicians seem oblivious to the thought that we might not be able to do this kind of thing as paid overtime.

Lots of interesting, reasonable responses, at least some of which are from people who know what they’re talking about, at the Guardian.

Warwick Mansell (@warwickmansell) has written a scathing critique of the National Curriculum review – it appears some of the same issues are present as with the ‘proposals’ for 14-16 exam changes. In particular, it seems ministers are ignoring the advice of professionals, the demands on teachers for writing local schemes, and the difficulties of implementing the changes in a short time. It’s as if the politicians haven’t a clue about the real world of education. The contrast between the evidence found and quoted in this article, and the very vague attempts at justification by Gove, Gibbs et al, is striking.

Thoughts, comments, ideas? Is Gove, as some have suggested, leaking such dramatic changes so that more reasonable ‘official’ ones are accepted more easily? I suppose we’ll find out in time whether he has some evidence-based suggestions or if this has just been a way for him to bolster political support for a future leadership bid. I’ll leave you with that scary thought: that instead of being about children’s qualifications, this could all have been for political advancement. That’s the real weakness of having a Secretary of State who is a politician not an educator.

#govelevels

Well, it’s been an interesting few days. I offer no apology for my knee-jerk reaction to the news that our Secretary of State for Education, Michael Gove, was proposing a return to two distinct tiers (caution, Daily Mail link) at age 16. I’ve spent the last few days trying to track the story, as much as a full-time teacher with no political or journalism contacts can do. This is my more constructive response to his ideas, as far as we can understand them.

The ‘plans’

  • students will start new courses from autumn 2014
  • two tiers, to be modelled after but *not* called O-levels and CSEs
  • between 2/3 and 3/4 of students will do the higher level exams, presumably terminal, including English, Maths and Science
  • ‘less intelligent’ students will sit simpler ones, with no threshold of 5 GCSEs A*-C
  • no National Curriculum
  • a single exam board per subject

Context of the leak

Nick Clegg was totally unaware of the ideas and has declared that he will stop them coming into force. Of course, we know how effective the LibDems were with the NHS Bill. It seems unclear whether David Cameron was briefed – if so, it raises interesting questions about the cosy relationship between the Prime Minister and his coalition deputy. In many ways, it seems as if this has all grown out of Gove’s department – the chair of the Education Select Committee also seemed to be in the dark about the proposals. And that Gove has apparently failed to ask education professionals is hardly a surprise, considering the revelations from the Expert Panel.

According to Google, a leak is ‘an intentional disclosure of secret information’. The question here is who would have leaked material about something which is, paradoxically, so secret and yet so vague. A civil servant in the DfE, perhaps? Or, as Andrew Neil suggested on the political programme ‘This Week‘, Michael Gove may have explained it to the editor of the Mail himself. It makes me wonder if his intention was to test the waters ‘unofficially’, or if he hoped to force Cameron to support him against political opponents. If either of these is true, it has, on the face of it, been unsuccessful. But then, I’m sure I miss the political nuances. Let’s get back to the practicalities.

The Reality

These are not all ridiculous ideas. It’s important that educators accept and recognise good ideas from politicians, even if they are uninformed, because otherwise our criticisms of the rest are less credible. So let’s deal with each aspect separately, starting with those I (personally) would consider less problematic.

Terminal?

We are already in the process of moving to a terminal exam in most subjects. How coursework and controlled assessments – like the absolutely horrendous, badly designed ISAs in Science – will work is not yet clear. In fact, like many teachers I’d really like to know how the exams will work for students starting courses in less than three months.

  • Will the content be ‘substantively’ the same, or really the same?
  • When will the terminal exams be?
  • Will students be able to resit the terminal exam for Science A (year 10ish) at the end of their year 11?

So a terminal exam is not necessarily bad, as long as politicians and the media understand that this will inevitably lead to students getting lower grades. I’m currently looking for research comparing the later recall and understanding of students who sat modular exams with those who sat terminal ones. It seems likely that, like the O-levels Gove seems to remember with such fondness, this will mean relying more on memory than understanding. Some of the pressure probably comes from universities’ claims that students who did modular A-levels lack depth of knowledge, even though degree courses are now almost entirely modular. Thanks to @prid09, who tracked down this paper from the IOP which suggests a link between a decrease in skills and the introduction of modules. (Of course, evidence that multiple choice tests may favour male candidates shows that any assessment method has difficulties.) I’m still reading through this huge paper from Cambridge Assessment, kindly found for me by @begration/@LearningSpy.

From a selfish point of view, I like the idea of having more teaching time, and fewer disruptions due to exams and resits. How this will affect student morale remains to be seen. And I’m very curious about coursework/ISAs etc., which by definition are not terminal. I really hope that if we do move this way, we get much more in the way of sample papers. A single example is nowhere near enough, as we’ve found out each time. I still like my idea of a crowdsourced exam board. But we’ll see.

Goodbye National Curriculum and ‘GCSEs’

This is a complete red herring, as many have pointed out already – in particular I’d again recommend Chris Cook’s analysis in the FT. If there is to be only one exam specification per subject (discussed below) then this effectively sets the national curriculum, arguably more clearly than at present. It would be interesting to see what effect this has on subjects at KS3, and on non-examined subjects between 14 and 16 such as PSHE and PE. As far as I know, there’s no information about this so far – perhaps Gove hasn’t considered it?

The change of name is also a red herring. What he’s suggesting are not the same as the O-levels of the past. He’s just invoking their mystique without examining the very different aims and outcomes of those exams a generation ago.

Single Exam Boards

According to the Daily Mail article, existing boards will be invited to bid for the right to set the higher-level exams. This would mean that once a contract had been won, other boards would have little use for their subject-specific expert staff (as pointed out in the TES). Presumably exam boards would submit possible papers to Ofqual, or directly to the DfE, for consideration. There are definite strengths to a single board per subject, reducing worries about a ‘race to the bottom’. But it is not an approach without concerns.

Textbooks would be written to match the specification even more closely, perhaps especially for contracts won by EdExcel/Pearson [corrected, thanks to Mary and DrDav for the proofread!]. There would need to be clear safeguards: firewalls between the writers of books and the writers of papers. The current model does allow for innovation in assessment, as demonstrated by the OCR 21st century scheme. Would the board be able to offer variant courses or routes, like today’s AQA Core Science A and B courses? Would there be ‘similar’ courses, perhaps ‘Pure Maths’ and ‘Functional Maths’? Or would the more accessible papers be left to the weaker students?

Alpha or Epsilon?

This brings us to my biggest objection to the details of these proposals, rather than how they have come about. If the Daily Mail is to be believed – usually a risky proposition, but Gove has confirmed the details of the ‘leak’ to Parliament – then between a quarter and a third of students will sit a more basic exam.

This means labelling students at 14 as unable to compete. Even if we could be perfectly accurate – which of course we can’t – this has huge consequences.

  • students will lose motivation as they know they cannot improve beyond a D (as they can at present, even on Foundation)
  • those who struggle in exam situations (anxiety, perhaps, or a specific learning difficulty) will struggle when they cannot rely on coursework to help them
  • employers, without even looking at the grades achieved on these CSE equivalents, will write them off
  • parents will blame teachers for pre-judging their children at an early age.

If we assume that teachers can be 95% accurate at age 14 about a student’s likely grade outcome – which is wildly optimistic – my setting would put more than ten students each year into the wrong programme. Nationally, if 800,000 students are in the year group (estimate from WolframAlpha), then that’s 40,000 kids who get messed around. The issue is unlikely to be with the weakest students, but, as usual, with those at the C/D borderline.
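For what it’s worth, the arithmetic behind those figures is easy to sanity-check. A minimal sketch (the 800,000 cohort and 95% accuracy are the rough figures above; the 200-student year group is a hypothetical school size, chosen purely for illustration):

```python
# Back-of-envelope estimate of students placed on the wrong track,
# assuming a given accuracy of prediction at age 14.

def misallocated(cohort_size: int, accuracy: float) -> int:
    """Number of students likely assigned to the wrong exam tier."""
    return round(cohort_size * (1 - accuracy))

print(misallocated(800_000, 0.95))  # national cohort: 40,000 students
print(misallocated(200, 0.95))      # one school year group: 10 students
```

Even at an implausibly generous 95% accuracy, the error falls on tens of thousands of students nationally, and the point in the paragraph above is that those errors cluster at the C/D borderline rather than at the extremes.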

I would have a lot more interest in the idea if I believed that a quarter of Britain’s privately-educated children would be entered for these lower-level exams.

Gove seems to have made a huge conceptual error here, in harking back to the golden age of the O-level. The higher level exam was never intended for a large chunk of the population. It was for the elite, and it was designed for a world in which only a small proportion would need higher level skills. The rest were being educated for manual work, factory and field labour, in a world without greater prospects. Do we really think that a quarter of our kids should be limited by the exams they are selected for at age 14 to these kinds of jobs? Do we really believe – does Gove really believe – that this is the world in which we live?

Frankly, the thought of being asked to select which quarter of my year 9 students won’t have a chance to do A levels or go to university scares the hell out of me. Of a class of 30, I could confidently choose 3 or 4. But 8? Too many things change between 14 and 16, too many kids suddenly start working, or get over problems, or grasp the maths they need to get that C grade. It’s not about the future medics or lawyers. It’s about the ordinary, unexceptional – the ones Gove seems not to know or care about.

Evidence

Don’t worry, the rant is nearly over. This final criticism is about the way in which these plans were drawn up, rather than their advantages or disadvantages. Gove appears to think that because he has been to school he knows enough to run all British schools. He listens to heads of private schools, and to journalists and politicians. He talks about a ‘gold-standard’ and makes deliberately bad comparisons between O-level questions and those at foundation tier GCSE. He ignores the experts, the educators, the teachers who will somehow have to implement his ideas. He fails to look at data which would inform a sensible choice, or allow time to collect more results so that we can make our education system work better. He overrules governors who object to his view about what is best for children, but claims he wants decisions to be local. He claims to want teachers to be involved, then criticises them at every turn.

If we want to make schools better, to make education work better for our students – rich and poor, smart and weak, north or south – then we need to look at facts. That means research, properly collected and carefully analysed. It means accepting that not all interventions will work. It means following the evidence, not the ideology. It means thinking hard about what we want our education system to achieve, what kind of 16 or 17 or 18 year old we want to produce. It means ignoring wishful thinking and rose-tinted memories, personal prejudices and media rabble-rousing.

Because in the end, our kids – mine and yours, our students and our offspring – deserve better than this.

Edit: I recommend checking out a contrasting viewpoint at @lauramcinerney’s blog.

#500words from Michael Gove on #purposed

I’m sure my readers, if there are any, are familiar with the fantastic 500words project running to get people thinking about the purpose of education. I wrote my own contribution a while back, but today – because of all the fuss of the last few days about #govelevels – I wanted to write one from another point of view.

Obviously this is a parody, and I intend to write something more constructive over the weekend. If you’ve not already seen it, I recommend @xtophercook‘s excellent and prompt post. In the meantime, I hope this makes you smile. Or snarl. Or something.

#500words from Michael Gove

I dream of a Britain in which bright students are challenged with world-class exams. I want to live in a country where teachers prepare these pupils in quiet classrooms, and so do the people of this country. I want our teachers to learn from the best of the private sector, and from countries like Finland and Singapore. Learn the bits about recruiting the highest achievers, of course, not about taking responsibility back from the state or abolishing inspections.

I want our pupils to achieve the very best that they are capable of, according to a single assessment at age thirteen. The purpose of education is to prepare students for the real world, a world in which an offhand judgement at a young age cannot be appealed or changed. I want clearly able students, making up the top three-quarters of the population, being tested on their memory and ability to parrot back their notes. Or maybe the top two-thirds. Because the purpose of education is surely to encourage competition and excellence, and we don’t want too many doing the useful exams. These gold standard exams will be a familiar sight for those of us who studied at the best schools, due to ability, hard work or having parents who sent us there. That such students then receive a one-sided view of education is in no way a disadvantage, as they can then make judgements without being bothered by facts. These young people, who will disproportionately live in the South and come from higher income families, will learn a great many facts. This is clearly what schools are for, although equally clearly the facts in question must be chosen by politicians, as teachers can’t be trusted to think for themselves. Furthermore, it is obvious that these facts should be about classics of English Literature, classical music and obscure dead languages.

For the remaining pupils, we expect a different outcome, suited to their lower abilities. By choosing their future path early on, there will be more time to prepare them appropriately. Their education must focus on relevant skills, because they will have no chance to continue to A-levels or university. Unlike the old ‘foundation tier’, part of the failed GCSE experiment, there is no danger that they will work hard and achieve a C grade, or even a B. We don’t want to encourage that sort of attitude – far better that they are clearly told, by politicians and teachers, that they need have no aspirations beyond menial jobs and being denied state support. Education is about preparation for life, and far better that these students are labelled with an exam so that no matter how hard they work, it is clear that they were second-class students from an early age. Giving people chances to improve themselves is rarely successful, and encouraging achievement for all is only useful as long as we don’t actually help them succeed.

So this is the purpose of education – to celebrate those who are the best at what I think is important.