Required Practicals

Morning all. I was at the Northern #ASEConf at the weekend, had a good time and had lots to think about. I’m going to try really hard to blog it this week, but I’m buried under a ton of stuff and pretty much every person in my immediate family is either ill, recovering or about to go into hospital. And Trump apparently won, which makes me think it’s time to dig a fallout shelter and start teaching my kids how to trap rabbits for food.

Anyway.

One of the recurring discussions between science teachers is about the new required practicals for the GCSE specs. I’m trying to put some resources together for the physics ones as part of my day job, on TalkPhysics (free to join, please get involved) and thought I’d share a few ideas here too.

Who Cares?

The exam boards don’t need lab books. There is no requirement for moderation or scrutiny. There is no set or preferred format. And, realistically, until we’ve seen something better than the specimen papers there’s no point trying to second-guess what the students will be expected to do in the summer of 2018.

So apart from doing the practicals, as part of our normal teaching, in the normal way, why should we do anything different? Why should we worry the kids about them? Why should we worry about them? There’s time for that in the lead-up to the exams, in a year’s time, when we’d revise major points anyway. For now, let’s just focus on good, useful practical work. I’ve blogged about this before, and most of it comes down to more thinking, less doing.

Magic Words

What we can do is make sure kids are familiar with the language – but this shouldn’t be just about the required practicals. So I put together some generic questions, vaguely inspired by old ISAs (and checking my recall with the AQA Science Vocab reference) and ready to print. My thinking is that each laminated card is handed to a different group while they work. They talk about it while doing the practical, write their answers on it, then the cards get added to a wall in the lab. This offers a quick review and a chance for teachers to see how kids are getting on with the vocab. The important thing – in my view, at least – is that it has to be for every practical. This is about improving fluency through frequent testing. And it ticks the literacy box too.

EDITED: more cards added, thanks to a suggestion from @tonicha128 on Twitter.

So here you go: prac-q-cards-v2 as PDF.

Please let me know what you think, whether I’ve made any mistakes, and how it works if you want to try it out. It would be easy to produce a mini-test with a selection of these questions, or better ones, for kids to do after each practical. Let’s get them to the stage of being so good with these words that they’re bored by being asked the questions.


You’re Welcome, Cambridge Assessment

It’s not often I can claim to be ahead of the trend. Pretty much never, to be honest. But this time I think I’ve managed it, and so I’m going to make sure all my readers, at least, know about it.

Recently the TES “exclusively reported” – which means other sites paraphrased their story and mentioned their name, but didn’t link – that Cambridge Assessment was considering ‘crowd-sourcing’ exam questions. This would involve teachers sending in possible questions which would then be reviewed and potentially used in external exams. Surplus questions would make up a large ‘question bank’.

I suggested this. This is, in fact, pretty much entirely my idea. I blogged ‘A New Exam Board’ in early 2012 suggesting teachers contribute questions which could then provide a range of sample papers as well as external exams. So it is not, despite what Tim Oates claims, a “very new idea.” Despite the similarity to my original post I do, however, have some concerns.

Backwards Backwards Design

So instead of teachers basing their classroom activities on giving kids the skills and knowledge they need to attempt exam questions, we’re doing it the other way around? As I’ve written before, it’s not necessarily a bad thing to ‘teach to the test’ – if the test is a good one. Writing exam questions and playing examiner is a valuable exercise, both for teachers and students, but the questions that result aren’t always helpful in themselves. As my OT-trained partner would remind me: “It’s the process, not the product.”

Credit

Being an examiner is something that looks good on a CV. It shows you take qualifications seriously and have useful experience. How can teachers verify the work they put into this? How can employers distinguish between teachers who sent in one dodgy question and those who shared a complete list, meticulously checked and cross-referenced? What happens when two or more teachers send in functionally identical questions?

Payment

A related but not identical point. How is the time teachers spend on this going to be recognised financially? And should it be the teacher who is paid, or the school? Unless they are paid, teachers are effectively volunteering their time and professional expertise, while Cambridge Assessment will continue to pay their permanent and contract staff. (I wonder how they feel about their work being outsourced to volunteers…)

Quality

It’s hardly surprising at this early stage that the details aren’t clear. One thing I’m interested in is whether the submissions shared as part of the ‘question bank’ will go through the same quality control process as those used in the exams. If so, it will involve time and therefore money for Cambridge Assessment. If not, it risks giving false impressions to students who use the bank. And there’s nothing in the articles so far to say whether the bank of questions will be free to access or part of a paid product.

Student Advantage

Unless there are far fewer ‘donated’ questions than I’d expect, I don’t think we will really see a huge advantage held by students whose teachers contributed a question. But students are remarkably sensitive to the claims made by teachers about “there’s always a question on x” or “it wasn’t on last year’s paper, so expect y topic to come up”. So it will be interesting to see how they respond to their teachers contributing to the exam they’ll be sitting.

You’re Welcome

I look forward to hearing from Cambridge Assessment, thanking me for the idea in the first place…

Data Analysis Questions

As I mentioned in my previous post, I’ve recently been doing some freelance work in a local school. The role is short-term and has an interesting mix of aims, but one part is to work with Year 11 students on data analysis questions. Now, obviously I’ve taught these skills before. But I’ve not used the OCR B specification before, which features a final data question worth ten marks. I know this spec is running out soon, but thought it might be worth sharing what I’ve created.

Firstly, a plea to all exam boards. When you release Examiners’ Reports – which are really useful, please keep doing it – can you combine them with the markscheme for easy reference? It’s something I’ve done for a while but it would make much more sense for you to do it.

2014

2013

Specimen

Predictably, the specimen paper isn’t a great example to use. I’ve not included the 2015 paper because many schools will be using it for preparation in controlled conditions. The links above are to my own copies in case OCR rearranges their site with the new specifications, and I’ve added the Section D page details to the filenames to make life easier for colleagues.

It seems a good time to remind you all that in the past I produced quite a few resources for looking at past exam papers, mostly AQA. The tags on the right should make it fairly easy to find them.

When we used these in class, one of the outcomes was that students put together a list of “things to try if you’re stuck”. Now, for many pupils this will have been built into their teaching, but we all know that kids don’t always absorb what we’re hoping they will. I think the real value of this is to generate a list with your own students, but for your interest:

  1. Highlight or underline numbers in the question
  2. Draw lines from the axes at specified values so you can find the corresponding value
  3. If the question is about differences, you’ll need to add or subtract
  4. If the question is about rates or uses the word ‘per’, you’ll need to divide or multiply, and you might need to think about gradient or slope (see the worked example below)
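
A minimal worked example of point 4, with numbers invented purely for illustration: suppose a distance–time graph passes through (0 s, 0 m) and (3 s, 12 m). The rate – here a speed – is the gradient:

\[ \text{gradient} = \frac{\Delta \text{distance}}{\Delta \text{time}} = \frac{12\,\text{m} - 0\,\text{m}}{3\,\text{s} - 0\,\text{s}} = 4\,\text{m/s} \]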

Comments and suggestions welcome, as always.

GCSE Practicals

You’ll already know that the assessment of practical work is changing. (I recommend this article by Alistair Moore and this one at the RSC from @MaryUYSEG for useful perspectives.) At A-level it’s changed already, as part of many other alterations. The ISAs are gone for post-16, and it’s fair to say that most teachers aren’t going to miss them. At GCSE these changes will be part of the new specification which officially starts in September 2016, and which many schools have already started to use for their Year 9 students. Which is brave, when the specifications haven’t been approved yet! If you’re teaching A-level Physics I’d recommend the resource created by one of my day-job colleagues at the SPN, available to all.
Different exam boards are taking different approaches, but there’s a big overlap. Each has a list of practicals which are required/recommended/suggested, and students will need to have a signed form of some kind which says they’ve done them. This means they’ll have had the opportunity to gain all the relevant skills (according to Ofqual), which will be a pass/fail ‘extra’ to the grade. I predict, somewhat cynically, that the vast majority of students will have gained these skills on paper no matter how much their lab work resembles that of Beaker from the Muppets. 15% of the final exam marks will be awarded for students demonstrating in a written exam that they can think like a scientist, probably in a similar way to the ISA papers.
The list of practicals is a minimum expectation – a lower limit rather than an upper one. Most are ones we have always done, in one form or another. Students don’t have to work independently on all of them, or in exam conditions. They need not (and in my opinion should not) do them as a separate unit or topic but as part of their normal experience of science, alongside science content and social context. There is no specific way they are expected to write them up or record their results.
My plan is to create a resource list for each of the GCSE Physics practicals, drawn from AQA, Edexcel and OCR. These are my interpretation and, certainly at the moment, I’m doing them in my own time for no charge. (If anyone would like them sooner and/or to sell, contact me with a price in mind.)

Exam Paper Debriefs (Summer 2012)

I’m combining two resources into one post here, but hopefully they should still show up by searching. (He types, hurriedly adding some tags.) I’ve made two powerpoints, each matched to what I think are the easy marks available on the summer 2012 P1 and P2 exams from AQA. They’re useful as practice or as full mocks; I often have students go through them focusing on what they should all aim for, before checking through in more detail. Having students divide their missed marks (using this exam paper debrief pdf) into recall failures and method mistakes can be helpful.

If students are able, they could also be pointed towards the examiners’ reports, which are only available if you go through the subject link at AQA rather than the direct Past Papers route. If not, then this is our job anyway – perhaps something to consider as part of a backwards design approach?

P1 june2012 easy as ppt, for the P1 summer 2012 exam – see also my P1 summary activity.

P2 may2012 easy as ppt, for the P2 summer 2012 exam – see also my P2 summary activity.

And yes, before you ask – I am working on equivalent resources for more recent exams, hopefully to be done before we all need them for mocks. Although the summer 2013 papers haven’t shown up yet – is that because, without January 2014 papers to use, AQA are expecting those to be used as mocks too? Must check e-AQA… (adds to ever-growing to-do list)

Finally: yes, I’ve been fairly quiet and quite down lately; lots going on, I’ll be fine, send chocolate and coffee if feeling helpful. As that’s pretty much all I’ve been eating for a while, supplies are running low!

Generating Electricity (the YorkScience way)

If you’re a science teacher and not yet aware of the approach being trialled and developed as part of YorkScience, you should probably go and have a look. My understanding is that it revolves around two linked ideas:

  • If we start by considering how we will assess competence, we can make sure we use appropriate activities to develop that competence.
  • Students should have access to everything we can provide about how answers are judged, so they can focus on what separates good and better answers.

To a novice, outsider or government minister, this may look like ‘teaching to the test’. The problem is that using that phrase as a criticism shows the speaker thinks the test is no good. There’s no reason why we shouldn’t teach students to perform well in a test, if the test shows us something useful about the students’ knowledge and skill.

Anyway, political point made, let’s get on with the show.

I’ve produced (or more accurately, mostly finished producing) a resource which teachers can use to assess and develop understanding of generating electricity. There are two pdfs. The first is intended to make a booklet (4 sides of A4) which students write on over the course of a lesson and homework. The lesson plan would be as follows:

Starter: “Wind or Nuclear Power?” on the board, with cartoons or images if preferred.

Main:

  1. Students attempt the multiple choice question in exam conditions. Allow time to fill in the ‘Why’ section too – perhaps ten minutes overall?
  2. Model the use of the confidence grid with the first part of the exam question, ideally projecting the pdf and, by discussion, ticking appropriate boxes.
  3. Students work together in pairs or small groups to finish the grids.
  4. The second resource generating electricity diagnostic incomplete ms leads students through different (but overlapping) activities depending on which answers they chose. This is intended to mimic the teacher judgement by which you explain things in different ways depending on how and why a student made a mistake. So far, only the first part (of four) is completed.
  5. Discuss and compare the notes students have made to support them in each answer.
  6. Start (and probably set for homework) the notes summary on the last page of the booklet. This includes key words for prompts and gives some suggestions of format.

Plenary:

  • Which would you choose, Wind or Nuclear Power? Students must back up their opinion with a (revisable) fact.
  • What exam skills have we practised today?

I’m hoping to post the full version of the markscheme pages, as soon as they’re done. This may be completed as an extension activity by my triple group. 🙂 Comments and suggestions welcome, please let me know what you think.

P1 Summary Activity

To be honest, this is long overdue but it’s been a bad month. Lots of other stuff going on, not all school-related – which also accounts for my fairly low output on Twitter. Which you’ve probably all enjoyed. 🙂

Anyway; one revision activity, like the others. This may be useful to help kids note down main points, check understanding, test themselves etc etc for the AQA P1 exam. Some will be doing it in January, some in the summer. Either way, hope it’s useful – please let me know if it is.

  P1 Revision Activity as a pdf

Ofqual’s Absolute Error

In science lessons we teach students about the two main categories of error when taking readings. (And yes, I know that it’s a little more complicated than that.) We teach about random and systematic error.

Random errors are the ones due to inherently changing and unpredictable variables. They give readings which may be above or below the so-called ‘true value’. We can make allowances for them by repeating the reading, keeping all control variables the same, then finding a mean value. The larger the range, the bigger the potential random error – this is now described as the precision of the reading. I sometimes have my students plot this range as an error bar.

A systematic error is an artifact of the measuring system. It will be consistent, in direction and size (perhaps in proportion to the reading, rather than absolute). A common type is a ‘zero error’, where the measuring device does not start at zero so all readings are offset from the true value. We sometimes calibrate our readings to account for this.
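
For colleagues who like something concrete, here’s a minimal sketch in Python – the readings and the zero error are invented numbers, purely to illustrate the two categories:

```python
# Toy illustration of random vs systematic error; all numbers invented.

readings = [4.1, 4.3, 4.0, 4.2]   # repeat readings of the same quantity
zero_error = 0.2                  # meter reads 0.2 when it should read 0

mean_value = sum(readings) / len(readings)
spread = max(readings) - min(readings)   # the range: bigger range, worse precision

# Calibration: subtract the consistent offset to remove the systematic error
corrected = mean_value - zero_error

print(f"mean = {mean_value:.2f}, range = {spread:.2f}, corrected = {corrected:.2f}")
```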

You can consider spelling errors due to sloppy typing as being random, while persistently misspelling a particular word is systematic.

So what does this have to do with Ofqual?

The recent issues with the scoring of GCSE English coursework – discussed on twitter with the hashtag #gcsefiasco – are a good example of errors causing problems. But if we use the scientific approach to errors, it is much harder to blame teachers as Stacey has done.

Coursework is marked by teachers according to a markscheme, provided by the exam board. (It’s worth remembering that apart from multiple choice papers all external exams are marked in this way too.) An issue with controlled assessments is that teachers are unavoidably familiar with the marking guidelines, so can ensure students gain skills that should help them demonstrate their knowledge. This is after all the point of the classroom, to learn how it’s done. To complain that we ‘teach to the test’ is like criticising driving instructors for teaching teenagers how to drive on British roads.

Once the work of all students in a cohort has been marked, the department will spend some time on ‘internal moderation’. This means checking a random sample, making sure everyone has marked in the same way, and to the standard specified by the markscheme. Once the school has committed to the accuracy of the marks, they are sent to the exam board, who will specify a new random sample to be remarked externally. If the new scores match those awarded by the school, within a narrow tolerance, then all the scores are accepted. If not, then all will be adjusted, up or down, to correct for a systematic error by the department. There will still be a few random errors – deviations from the ‘correct’ score on specific essays – but these will be fairly rare.
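
As a toy version of that logic (the tolerance value and the adjustment rule here are my invention – the boards don’t publish the details):

```python
# Toy sketch of external moderation; tolerance and adjustment rule invented.

TOLERANCE = 2  # marks

def moderate(school_marks, external_remarks):
    """school_marks: {student: mark} for the whole cohort.
    external_remarks: {student: mark} for the board's random sample."""
    diffs = [external_remarks[s] - school_marks[s] for s in external_remarks]
    mean_diff = sum(diffs) / len(diffs)
    if abs(mean_diff) <= TOLERANCE:
        return school_marks  # within tolerance: accept every mark as submitted
    # outside tolerance: shift the whole cohort to correct the systematic error
    return {s: m + round(mean_diff) for s, m in school_marks.items()}
```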

The exam board then converts the coursework score, using a top-secret table, into a percentage of the available marks. You may not need to get everything perfect to get an ‘effective’ 100% on the coursework element of the course. And dropping 2 marks out of 50 on the raw score, as marked by the teachers, may mean more than a 4% decrease after conversion. This table will be different for different papers because some exams are harder than others, but changes should be minimal if we want to be able to compare successive years.
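
The table itself is secret, so the numbers below are pure invention, but a toy version shows how dropping 2 raw marks near the top can cost more than 4% after conversion:

```python
# Invented raw-score-to-percentage conversion table for a 50-mark task.
# Note the non-linearity at the top: full raw marks aren't needed for 100%.
conversion = {50: 100, 49: 100, 48: 95, 47: 92, 46: 90}

print(conversion[50] - conversion[48])  # dropping 2 of 50 raw marks costs 5%
```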

So what happened last summer?

Students who had gained the same raw score on the same coursework task, which had been marked to the same standard as confirmed by the exam boards during external moderation, were awarded different percentages by the exam boards depending on when the work was sent in. This was after sustained pressure from Ofqual, possibly because using the same boundaries in June as they had in January would have resulted in ‘too many’ higher grades. This was not about a small number of random errors in marking. This was not about a systematic error by some or all schools, because the boards had procedures to identify that. This was about a failure by the exam boards and Ofqual to discreetly fix the results the way they intended to.

It is a basic principle in science that you cannot adjust your results based on what you want or expect them to be. You might be surprised, you might recheck your working, but you can’t change the numbers because of wishful thinking. If there was an error, it was by the exam boards and Ofqual, who showed that they could not specify what work was equivalent to a C grade.

The procedures were followed in schools. The exam boards agreed that the controlled assessments were marked to their own standards. And yet Ofqual still claim that it is the fault of us teachers, who prepared our students so well for the controlled assessment that we are being called cheats.

I’ve blogged before about the weaknesses built in to the science ISAs. The exam board and Ofqual are either too busy to read what one teacher has to say – perfectly reasonable – or don’t have an answer. I don’t understand how it is our fault when their system approved what teachers did and how they marked.

So maybe we shouldn’t be marking controlled assessments at all.

PS (This is the cue for the unions to step in. And they won’t. This is why we need one national professional body representing teachers, using evidence rather than political rhetoric.)

Enemies of Promise

This will be a short post, partly because I’ve got lots of other things on the go and partly because I’m too angry about what appears to have happened. I say appears because I truly hope things aren’t as they seem, for the sake of our students.

In January – and at points since then – Michael Gove has labelled teachers and others who criticise his plans as ‘enemies of promise’. This has been used despite the criticisms often being valid and fair, based on data rather than ideology, and often from those who clearly know far more about educational theory and practice than him.

It appears, from lots of conversation on twitter and in the press, that this year’s GCSE results show some unexpected features. Overall, they seem to be a little lower than in previous years, and one exam in particular seems to have affected English results. Students who completed the foundation controlled assessment papers in January needed a lower score to achieve a C than those who sat the equivalent exam in June. (This issue is one we have seen many times with the AQA Science equivalent, ISAs.) The difference is significant and means that many students nationally have failed to reach a Grade C despite being on track for it up until this point.

There are two issues here, one of which is immediately significant. Students who have failed to achieve a grade C in English will find that their next steps – college or sixth form courses, apprenticeships and so on – are now barred to them. This matters now. Many of them will have been expecting to confirm their education and training places in the next week or so. There is little time to address this problem, if things are really as unfair as they seem.

And things are unfair. Most teachers, most people, accept that more challenging courses are worthwhile. Students may not be happy with the idea, but the difficulty of achieving particular grades is effectively an arbitrary choice. Changing it from year to year, or between exam boards, obviously makes comparisons and target setting much harder, but it is not unfair. Changing the grade boundaries between students sitting an exam and being given their grade – for students doing one particular course – is clearly very different. The press today have suggested it is like moving the goalposts not just during a football game, but after a penalty has been taken and before the ball crosses the line.

A cynic would suggest that the government sees moving goalposts after the numbers are known as a standard political tactic.

The other issue – and today of all days, this must be seen as secondary to the plight of affected students – is that schools are judged on their results. Gove and the Department for Education can take greater steps to control what happens in a school if GCSE results drop below certain levels. Significant indicators are the number of students achieving 5 A*-C grades, including English and Maths, and the EBacc. Both of these will drop in schools which have had students marked down from a C to a D grade in English due to these eleventh-hour changes.

I’m trying very hard not to be cynical. I don’t teach English, except in the sense that many teachers share favourite books, correct spelling or help with grammar. But like many others, I struggle to see the fairness in changing how students are graded, after they have studied and sat the exam. Their lower results will now make it easier for unpopular, non-evidence-based and rushed changes to be pushed through, including forced academisation. This means it is even more important to find out who ordered these grade boundary alterations.

Who are the enemies of promise now, Mr Gove?

A New Exam Board?

We’ve seen a lot of problems with exams recently – just look at the problems last summer with mistakes in a wide range of exam papers. Today I’ve found that AQA have spent so little time checking that suitable research sources are online that the only good Google results are their own teacher notes and a primary science investigative cartoon. On top of this, a new specification inevitably means a lack of practice material, which means students and teachers don’t really know what to expect. If you have to explain why this is unfair to non-teachers, perhaps this analogy might help: we wouldn’t expect to take a driving test on the road having only practised in car parks, would we?

I have an idea.

In fact, I have two ideas, neither of which is mine. If we take the ‘backward design’ principle (originated by Wiggins and McTighe, introduced to me by Robin Millar’s work) and combine it with a ‘curated crowdsourced’ model, maybe there’s a way to do a better job. 

Backward Design

My apologies to Robin and other experts if I miss the subtleties – I’m just a classroom teacher with delusions of writing grandeur. Instead of beginning a syllabus with the content that we want to teach, backward design asks what we want students to be able to do at the end – how will they be tested? How will we know if the course was successful or not (or, more precisely, how successfully the student has completed it)? If we create assessment tasks that will allow us to differentiate between students – ideally including, but not limited to, written exams – then we can develop a list of what students should learn, which gives us a list of possible learning/teaching activities. As Robin and others point out, ‘teaching to the test’ is only a problem if the test is not fit for purpose. If we produce a realistic, useful test then being prepared for it is a positive thing.

Crowdsourcing

So who better to contribute possible questions than teachers? Imagine a Google form set up by a new exam board; let’s call it CCEB. Exemplar material, based on accepted good practice, shows how to lay out mathematical working. Questions are entered, with a markscheme. Dropdown boxes allow those entering the question to define the marks available and to choose keywords describing the area(s) of science being tested. Active teachers, retired staff, academics – even students – all can contribute. The contributions are freely given on the basis that the results will be freely available as far as practical, probably via Creative Commons licensing.
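
Purely as a sketch of what each form submission might capture – the field names are my invention, since CCEB doesn’t actually exist:

```python
from dataclasses import dataclass, field

@dataclass
class QuestionSubmission:
    """One crowd-sourced exam question, as entered via the hypothetical form."""
    question_text: str
    markscheme: str
    marks_available: int                                     # from a dropdown
    topic_keywords: list[str] = field(default_factory=list)  # area(s) of science tested
    contributor: str = ""              # teacher, retired staff, academic, student...
    licence: str = "CC BY-SA 4.0"      # shared under Creative Commons, as above

example = QuestionSubmission(
    question_text="A car travels 120 m in 8 s. Calculate its average speed.",
    markscheme="speed = distance / time (1 mark); 15 m/s (1 mark)",
    marks_available=2,
    topic_keywords=["forces and motion"],
)
```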

Curated

When a certain threshold is reached – which, if every science teacher in the UK supplies a single question, won’t take long – the submissions are sorted by category and checked by CCEB staff. Because they are being proofread rather than written, it will be quicker and easier. If some of the original contributors – determined by random allocation – are paid for a day’s work, the questions can be pre-moderated as well. Mathematical questions can be kept in the same form but with different numbers substituted (a quick sketch of this below). A large pool of questions is now complete, ready for the exam, which can be balanced between topics. There will be enough questions, all produced at the same time, for several specimen papers to be made available. With a large enough pool, you could even make all the questions open source, like those for the theory element of the UK driving test.
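
And the ‘same form, different numbers’ idea mentioned above is simple to sketch – the template and number ranges are invented:

```python
import random

# One mathematical question kept in the same form, with different numbers
# substituted for each variant. Template and ranges are invented examples.
TEMPLATE = "A current of {current} A flows for {time} s. Calculate the charge."

def make_variant(seed):
    rng = random.Random(seed)   # seeded, so each generated paper is reproducible
    current = rng.randint(2, 9)
    time = rng.choice([10, 20, 30, 60])
    question = TEMPLATE.format(current=current, time=time)
    answer = f"{current * time} C"   # Q = I * t
    return question, answer

print(make_variant(2017))
```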

One Day 

It’s feasible that in the future, with enough questions available, every student could get a different but equivalent exam, as described in John Barnes’ book Orbital Resonance.

In the meantime, maybe we as science educators can get involved with setting better exams than the ones we complain about. The exam boards could ask for submissions in this way now. The cynic in me thinks that this would make it much harder for them to justify their existence. Maybe they would like to prove me wrong.