Moving Beyond Predict/Observe/Explain

I don’t remember when I first used the idea of breaking down a demonstration for students by having them follow the POE format:

  • Predict what will happen
  • Observe what actually happens
  • Explain it in context

I think a lot of science teachers used this before – or even without – referencing the ideas of Michael Bowen, who explains the approach in this video. He wasn’t the first, but I tracked down the link via the site of the National Science Teachers Association in the US. There are several papers available there, for example this one from a decade ago about hypothesis-based learning, which makes explicit the difference between a hypothesis and a prediction. It’s easy to see how these steps link nicely with a 5/7Es planning method. But I think it’s worth adding some steps, and it’s interesting to see how the approach might have developed over time. How students cope with these stages is an easy way to approach formative assessment of their skills in thinking about practicals, rather than simply doing them.

Please note – I’m sure that I’m missing important references, names and details, but without academic access I simply can’t track original papers or authors. My apologies and please let me know what I’m missing in this summarised family tree!

PEOE: I think this because

To stop students making wild speculations we need to involve them in a conversation justifying their predictions. I suppose this is a first step in teaching them about research: referencing their thoughts. I find this needs guidance, as many students mix up the two uses of ‘explain’: the derivation of their prediction and the link to accepted theory.

PODME: Recording what we observe

I got this from Katy Bloom (at York SLC, aka @bloom_growhow), I think after chatting at a TweetUp. I’m paraphrasing her point: in Science it’s not enough simply to observe, we must also share that observation. This can take two forms: describing in words and measuring in numbers. The explanation then becomes about the pattern rather than a single fact or observation. Bonus points to students who correctly suggest the words qualitative and quantitative for the observations here!

PBODME: My current approach

I’ve tweaked this slightly by making the first explanation phase explicit. The display is on the wall and students can apply this (with varying degrees of success) from Year 7 practicals with burning candles to Year 13 physics investigations into how gamma intensity is affected by the thickness of lead shielding.

  • Prediction of outcome
  • Because of hypothesis based on life experience, context or research
  • Observation using senses, measuring devices
  • Description in words of what typically happens (sometimes as commentary during practical)
  • Measurement using appropriate units, with derived results and means where needed
  • Explanation of results, patterns, anomalies and confidence

Is it getting ungainly? Having this structure means students can see the next step in what they are doing, and are hopefully able to ask themselves questions about how to develop a practical further. I suppose you could argue that the original POE approach is the foundation, and these stages allow us to extend students (or ideally allows them to extend themselves).

PBODMEC: Why does it matter?

In many ways, the natural next step would be about Context – why should we care about the results and what difference do they make to what we know, what we can do or what we can make?

I plan to follow up this post with the printable resources (wall display and a student capability checklist) but they’ll have to wait until I’m home. In the meantime, I’d welcome any thoughts or comments – especially any with links to other formats and their uses in the school science lab.


Maths Skills For Science Lessons

After taking part in a recent online CPD trial with the Yorkshire and Humber Science Learning Centre, I’ve been trying to find ways to help my students use their maths skills in a science context. (And no, this wasn’t prompted by the recent SCORE report.) As we discussed during the course (and yes, I want to blog about it in more detail) the issue isn’t always that they don’t have the skills – it’s that they don’t use them. Sometimes it’s about language differences (positive correlation vs directly proportional, for example) and sometimes it’s just some kind of mental block. I’m trying a few different things:

  • providing science formula sheets to Maths to use for practice in lessons
  • producing data sets that they can use in Maths lessons
  • display work highlighting similarities and differences between science and maths vocabulary
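
As a quick, hypothetical illustration of that vocabulary gap (a sketch of my own, not part of the booklet below): ‘directly proportional’ is the stricter claim, meaning a constant ratio and a straight line through the origin, while ‘positive correlation’ only says that one quantity tends to increase with the other.

```python
# Hypothetical sketch: 'directly proportional' (y = kx, constant ratio)
# versus merely 'positively correlated' (y rises with x, but not through the origin).
proportional = [(x, 2 * x) for x in range(1, 6)]    # y = 2x
correlated = [(x, 2 * x + 5) for x in range(1, 6)]  # y = 2x + 5

def is_directly_proportional(pairs, tolerance=1e-9):
    """True if every (x, y) pair gives (almost) the same ratio y/x."""
    ratios = [y / x for x, y in pairs]
    return max(ratios) - min(ratios) < tolerance

print(is_directly_proportional(proportional))  # True
print(is_directly_proportional(correlated))    # False
```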

But the focus for the blog post is something different. I’ve produced (but not yet finished trialling) a booklet for students to use and refer to in Science lessons. It covers a few areas identified by students and colleagues as causing problems. Each page includes an explanation, worked examples, hints and tips, possible applications and practice exercises. I’m making it available here in this untested state for comments, suggestions and improvements; click on the image for the pdf.

To Come (Hopefully)

  1. Corrected version if (when?) you find problems with it, with included pages for write-on answers/notes
  2. Markscheme/answer booklet
  3. Accompanying A4 display pages with extracts
  4. Additional pages if sufficient (polite) demand

I’d really love some feedback on this, everyone – please comment with improvements and suggestions.

Jim Al-Khalili as guest lecturer

Like me, you may have just watched the repeat of Chemistry – A Volatile History (Episode 2) on BBC4. By a happy coincidence, my Year 8 class are currently studying the periodic table (they already love The Elements Song), so I now plan to have @jimalkhalili in as a guest lecturer this week. Just watching something is a pretty boring (not to say ineffective) way to learn, so I thought I’d share a few ideas and the questions I’m planning to use, on the off chance someone else might find them useful. Below are some ideas pinched from my earlier blogpost ‘Constructive Laziness’:

  • Give them the questions first.
  • Provide a list of key terms (out of sequence) and ask them to note down definitions and/or examples.
  • Ask them to produce summary notes, perhaps using a Cornell blank.
  • Have them write a review for the BBC Bitesize website.
  • Ask them to choose headings for a Powerpoint that they can then write for homework.
  • Give them handouts using Powerpoint that have titles, but no content. This is another way to give them the framework for the notes. (Differentiated versions easily produced.)
  • Tell them it is old or out of date. What mistakes can they spot? How would they script an improved version?

For this episode, here are a few questions which can easily be copied, turned into a handout, copied on to a whiteboard (leave it up as the starter and see how many they can answer afterwards, no writing allowed), or just read out. All times are approximate, jotted down as I watched, scribbled questions and scoffed my tea. A lot of these are fairly trivial, and I’d suggest using only a selection – perhaps as a stimulus to inspire students to write their own, more useful questions. (I’m probably going to try out the Question Formulation Technique as described in @totallywired77’s blog post.)

I’d suggest skipping the first 2.5 minutes (until the credits) as they spoil the surprises.

Up to 15min: Dalton and atomic weight

1 How many elements had been identified by the early part of the century?

2 What was Dalton’s main social hobby and when did he do it?

3 What did Dalton call the particles we call ‘atoms’?

4 Which colour balloon drops quickly and why?

5 What does STM stand for?

6 How hot is the glass used to make the round bottomed flask?

7 What is the most common element in the Earth’s crust?

15-30min: Patterns

8 How close was Berzelius to the true weight of chlorine?

9 How many elements were in each group suggested by Dobereiner?

10 What is the second element tested in the water?

11 How many elements were known when Mendeleyev started to investigate?

12 In which year did John Newlands present his ‘octaves’ idea?

13 Which 2 gases does the presenter say smell similar?

30-40min: Mendeleyev

14 What fraction of the books in Mendeleyev’s study are about chemistry?

15 What did Mendeleyev call his card game?

40-50min: Spectroscopy

16 How did Rubidium get its name?

17 How did spectroscopy help to confirm Mendeleyev’s table?

18 What is the atomic weight of the gas first discovered in the spectral lines of the sun?

50min-end: Inside the Atom

19 What was Bohr’s chosen sport?

20 How many electrons in the first ring/shell/orbital?

21 Which was the heaviest known element at the time?

22 Which three metals does the presenter test?

23 What particles did Moseley count in the nucleus?

24 How old was Moseley when he died?

Answers

  1. 55
  2. bowls, Thursday afternoons
  3. ‘ultimate particles’
  4. yellow, because it contains (dense) Krypton
  5. Scanning Tunnelling Microscope
  6. About 1000 degrees Celsius
  7. Oxygen
  8. a fifth of a percent
  9. three (triads)
  10. Sodium (Na)
  11. 63
  12. 1866
  13. Chlorine and Bromine
  14. A tenth
  15. Chemical Solitaire
  16. The spectrogram shows a ruby red light
  17. Elements that filled the gaps in the table were discovered, matching Mendeleyev’s predictions
  18. 4 (He)
  19. Football (goalkeeper)
  20. 2
  21. Uranium (U)
  22. Copper, Rubidium, Molybdenum
  23. Protons
  24. 26

Bad Surveys make Bad ‘Research’

NB The title of this post has been altered, but the permalink remains unchanged so people can still find it.

Printable: fishy research as pdf

Adverts lie. This is not a big surprise. A hint of the truth, of course, makes an advert much more believable. Advertising is about what they don’t say, much more than what is explicitly stated. Now, as much as I can accept this (being allegedly grown up and everything – adult, if not mature) it doesn’t mean I should accept it when they use or abuse science to help them mislead the audience.

A recent post on Ben Goldacre’s Bad Science site – to be exact, a brief entry on the delicious miniblog which appears to demonstrate his uncensored stream of consciousness – caught my attention. Although I’m not on Mumsnet myself I had heard about the site, and Ben’s comment suggested that some dubious research had used the brand to get attention. The weblink didn’t work, possibly because once Ben was on to them the company decided to pull the press release, but I found another one [EDIT: which they also pulled – a copy is now found here, and if you go here you’ll find the text of it in case they get it removed again]. The ‘research’ was into the taste and health benefits of an Omega-3 (fish-oil) supplement for kids. A little more work found two posts on Mumsnet, one asking for participants and another listing their feedback. Comparing the data (I’m assuming that the feedback posts comprise the total of the data collected) to the press release, a few things caught my eye.

  1. The participants had to already use omega-3 supplements or have tried them in the past; this means any ‘evidence’ collected in the second (health effects) stage is even more worthless than the average survey.
  2. Because earlier survey answers are visible, surely this means that people are more likely to follow previous trends? It reminded me of experiments showing an extreme case of this by Solomon Asch, a social psychologist working in the 1950s.
  3. The comments in the feedback did not, on first glance, seem to be as positive as the press release had suggested.

A little time spent tallying responses confirmed this last impression. The press release claims that 93% of parents said that the product didn’t taste of fish. Of the 42 responses I found, seven said it did taste at least slightly fishy while 35 said it didn’t. The only way I can get that to be 93% is by treating it as 7 Yes answers out of 100 responses (giving 93% No), even though the question was about the kid and/or parent. This is either sloppy or deliberately deceptive.

They ask the question very carefully – they ask if it tastes of fish. By quoting this (slightly mangled) statistic, they can ignore the large number who said it tasted bad (the word ‘vile’ came up more than once). This seems to me to be a good example of a carefully selected proxy outcome (explained nicely in the fantastic How To Read Health News article, found on the NHS Behind The Headlines site).

They also claim that over half of the parents would recommend the product to a friend, while I counted only 14 Yes answers of the 42 who responded – exactly a third.
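
For anyone who wants to check the sums, here is a minimal sketch of the arithmetic using my own tallies (which, as I admit below, may be off by one or two):

```python
# Quick check of the percentages, based on my tallies of the 42 forum responses.
responses = 42
fishy_no = 35        # said the product did NOT taste of fish (7 said it did)
recommend_yes = 14   # said they would recommend it to a friend

print(f"Didn't taste of fish: {fishy_no / responses:.0%}")       # ~83%, not the claimed 93%
print(f"Would recommend:      {recommend_yes / responses:.0%}")  # ~33%, not 'over half'

# The 93% only appears if the 7 'yes' answers are treated as 7 out of 100 responses:
print(f"(100 - 7) / 100 = {(100 - 7) / 100:.0%}")                # 93%
```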

Well, what else should a science teacher and long-term reader of BadScienceBlogs do when faced with something like this? Produce a lesson activity and post it on his blog, of course! The printable activity (downloadable as fishy research pdf) has several possible approaches.

The ideal in some ways would be to give your students the tally sheets and weblinks, asking them to total up the answers to each question. There’s a page you can give them access to with the links they need. Alternatively, there’s a page with an extract from the press release, a sample answer and my totals. I’ve only gone through this once myself, so please let me know if my counting is off. I am confident that, although I may be off by one or two either way, there’s no way the data says what they claim in the press release. The only other possibility, of course, is that they collected data directly as well as through the forum. Of course that must be it. Silly of me to suspect anything else.

Either way, the last page is a (write-on) worksheet, with questions which will lead them through the ideas I have covered here and a few more. Students will have to compare the data to the press release and comment on possible reasons for the differences. They are invited to consider the phrasing of the questions (it specifies a fishy taste rather than a bad taste) and speculate on how the process could have been rather more rigorous. Finally, they will be asked to consider a brief summary of the evidence for fish oil for ‘average’ children and suggest how the popular ‘brain boosting’ hypothesis could be best tested.

As always, I’d be very grateful for any feedback on the activity. In this case I’d be especially grateful if you could let me know if my arithmetic isn’t what it should be! I know I haven’t especially focused on the evidence, or lack thereof, for the brain-boosting effects of fish oil. I figured I’d leave that to the professionals. I’m a teacher – I’ll stick with teaching. If you like this activity, you might like to check out my previous post (and associated scheme plus resources) on homeopathy. I will leave you with one last quote.

“Advertising is about making whole lies out of half truths.”

 

Mavericks?

Kids – come to that, people in general – love to feel that they’re the only one with the right answer. Proving somebody wrong gives us a sense of triumph. I suppose you could argue that as humans have evolved to battle with their wits rather than their fists, winning arguments is just another way we demonstrate our fitness to reproduce. Ignoring such a basic human characteristic will cause us problems, but we must also recognise how poorly this conditioning prepares our students for science.

Scientific disagreements are not settled by ‘winning an argument’ in the conventional way. Science is about accurately describing the real world, and as Philip K Dick wrote, “Reality is that which, when you stop believing in it, doesn’t go away.” It can be difficult to convince a student that settling a question in science is about evidence – and this is only emphasized when you explain that all ideas in science are subject to change if more evidence appears. Pupils who begin to appreciate this are taking huge steps in their understanding of how science works, and are doing so despite a very different view of science held by many adults.

Science as described in the media seems to be about facts. Those who disagree with accepted scientific ideas are described as ‘mavericks’, and simply by challenging widely-held viewpoints many will see them as admirable or interesting. These mavericks, or their advocates in the media, rightly point out that many of today’s accepted theories started out being ridiculed by the prevailing scientific opinion. (Wegener’s ideas about continental drift, which we now explain by tectonic plates, are a well-known example.) These scientists – and of course their equivalents in other fields – are now remembered as having fought mainstream opinion. But we remember them because they turned out to be right! It’s like the recent headline about a 2m lottery win; they used a system to get it. We never hear about the millions of other punters who failed despite having a ‘system’. This is an example of survivorship bias. In the same way, we rarely hear about the amateur and professional scientists who held on to their own pet theory, long after the evidence showed they were wrong.
 
With anything scientific, if you have evidence it usually turns out to be pretty hard to ignore the facts. Delays usually happen when a scientist has only part of the puzzle. Wegener, for example, did not have a plausible mechanism to explain continental drift; you could say that he had circumstantial evidence but no motive. The thing is that it’s a lot easier to talk about being oppressed, and the scientific paradigm, and people ignoring evidence, than it is to find convincing evidence which contradicts what everyone else is doing. Occam’s razor isn’t always right, but it is a pretty powerful way to weed out the idiots, fakers and other con-artists.

Adherents of homeopathy love to talk about ‘quantum principles’ and the ‘memory of water‘. Point out the huge theoretical weaknesses of their pet theory, however, or the lack of any good, high-quality evidence of it working beyond placebo, and they stop being quite so vocal. AIDS denialists (those who are convinced that the set of symptoms we call AIDS isn’t caused by HIV, but instead by lack of vitamin C or, bizarrely, the use of AZT) are, like many other similar monomaniacs, much better at quoting particular studies that support them than at taking in the broad sweep of evidence. The same is seen with climate change ‘skeptics’ – aka denialists – who cherry-pick data that suits them, ignoring the rest. The BCA did just this when they finally released their “plethora of evidence” last year, to remarkably swift analysis and mockery from the blogosphere. (I wasn’t there – actively, at least. I was cheering from the sidelines.)

It’s one of the things that can be very frustrating when the media – the BBC is particularly bad on this one – try to present the ‘two sides of the argument’. They give equal air time, and apparently equal weight, to (a) the representative of the NHS who has spent 30 years on vaccination research and (b) some worried mother who ‘just knows’ that her son’s diagnosis of autism must be linked to the MMR jab he had the same year. It makes a better narrative, a story that journalists (and the rest of us) love to hear – brave underdog fights off huge corporation and wins due to being pure of heart. Of course, it’s not always right. But it gives many people, including our students, the impression that scientists can’t make up their mind and we don’t know anything for certain.
 
It’s like a kid telling his Mum, “yes, I got an A in my past paper” and me pointing out that he got an A in one paper out of four, and that the past four modules are all C grades. Sometimes an outlier or anomaly tells us something really interesting is going on (if I drop a pencil 1000 times and even once it floats silently instead of falling to the floor, something very weird has happened). The example I often give to students is plotting the level of noise in a car against speed. It’s not a smooth pattern; it will spike at (what we now know are) resonant frequencies, when the parcel shelf or whatever starts rattling. Here the anomalies are interesting, especially to engineers who then try to solve the noise problem before the customers complain. But sometimes that anomaly is because we screwed up the method, or because a kid guessed better than usual on a multiple choice test.
 
The general public doesn’t have the knowledge or skills to assess the quality of the research. It’s worth pointing out that there are some fantastic sites offering just these skills; NHS Behind the Headlines is one, and more specifically their excellent article teaches you how to read articles about health news. Sadly, most people feel that they don’t have the time, patience or ability to do so, despite it taking less time than watching England lose at the football, again. This means it’s very easy to pull the wool over their eyes, or over the eyes of a journalist, and make it seem like the scientific community is split 50/50. MMR vs autism, evolution vs (un)intelligent design, conspiracy vs moon landing, homeopathy vs placebo… Most scientists know where they stand. In most cases there is a clear consensus on the big issues – the disagreements (which are many and frequently bitter) are about details. Important details, yes – but often not something newsworthy. They disagree on the tenth decimal place in a fundamental constant, or on the precise mechanism for how a small but vital part of the immune system fails when HIV infects a human being. Journalists don’t like telling that story, because it’s boring. Man bites dog is more fun.

The idea of a lone maverick, righting the wrongs of an uncaring establishment, is a popular one. It is, on many levels, an appealing one. Sadly, that does not mean it is a correct one. Journalists who perpetuate the idea that science is always about a struggle between individual people, between personalities, do us few favours in the long run. Andrew Wakefield has made the most of the attention he received, meaning that even now he has fans who refuse to hear about his many transgressions. He is far from the only example. In such cases the media often deserve, but rarely receive, a portion of the blame. In the classroom, we must be careful to focus on the ideas, the evidence, rather than who is saying what. And somehow we must do this while still pointing out that some sources are more reliable than others! 

In general, I find my students seem to think I’m pretty sceptical. In fact, most of them think I’m cynical. Now, some of that is from spending my working life surrounded by teenagers. But in general, I’m quite happy to be seen as a ‘skeptic’ – because although I may not be a scientist in practice, I would like to think I am one in spirit. What can be closer to the scientific method than asking for evidence? The more unlikely something is, the more evidence I would need. I explain to my students that I would like three things from them, by the end of their courses. I want them to be curious, asking questions and able to start looking for the answers themselves. I’d like them to be untrusting of authority, always searching for the most likely explanation, not taking the world at first glance.

Oh, I said three things, didn’t I? Well, it would be nice if they passed some exams as well…

Job Descriptions

A break from revision ideas – at least partly because a large part of me suspects that I’m spending more time on it than half my students. Instead I’d like to describe my solution to a perennial problem in science, a lack of meaningful engagement with practical work.

This may seem a surprise. After all, practical work is one of our subject’s selling points, surely? Everyone loves messing around in the lab. And yet time and again it can be so easy for practical lessons to degenerate into chaos. I’ve identified two categories of difficulty, partly guided by the ideas in the IoP Report Girls in Physics.

  • Some students rush in without thinking about the ideas they are investigating. This means details are missed and so data may not be meaningful.
  • Some students prefer not to handle the equipment, lacking confidence that they can apply the instructions, or feeling that their understanding will not allow them to design an experiment that answers the question.

It would be simplistic to suggest that this is purely a gender issue, but I think many colleagues would recognise that boys are much more likely to fit into the first category, while girls are more likely to match the second. In some ways this does not matter, as long as we recognise that students can be overenthusiastic and sloppy, or underenthused and less involved. In the classroom we respond to individual students to address their issues, rather than dealing with them as averages.

My first attempt to solve this issue was to require my students to work in mixed groups. Rather than specifying them myself, I allowed them to choose by themselves, only intervening when they could not manage to form groups composed of both boys and girls. In retrospect, perhaps I should have been able to predict the result. A couple of lessons later, I looked across my classroom, groups all working well… then realised that at every single experiment, the boys were (constructively) messing around with the equipment and the girls were sitting back with their folders open (to record results).

Hmmm.

Scene: same class, a week later. I’d have managed it sooner but getting the chains took a few days. (Not as bad as it sounds.) As the students discuss the starter, I walk around giving them badges. Each badge tells them what job they will be doing and is a different colour. They are told to put the chains around their necks and keep them on all lesson – the key words will remind them what their responsibilities are. They are then asked to get into groups of four, including one of each job. Only then do a few of the brighter students realise that they’ve been stitched up. The jobs of ‘equipment set-up’ and ‘measurements’ have mostly been given to girls, while most boys have the ‘quality control’ or ‘scribe’ roles to fulfil. They must work together, but this forces (‘encourages’ in my longer description for colleagues) the boys to sit back and think while the girls need to engage with the more ‘hands-on’ aspects of practical work.

I don’t use the badges for every lesson or every practical, but they are surprisingly popular with the students. They work best with longer, more investigative-style practicals. There are some issues, primarily with boys who sulk at not being allowed to touch the equipment (in some lessons – it’s obviously important to rotate the roles according to some kind of pattern). Some girls struggle to take the initiative, to solve problems with the equipment, but this is precisely why they need to do it! On the whole I can see several good points with this (or a similar) system.

  • As a teacher you can require those who normally take over practical work to take a back seat.
  • Girls can take their time with figuring out how to work an investigation without someone more confident over-ruling them.
  • Those who do understand need to explain their ideas to classmates clearly and concisely.
  • As you rotate roles, there is the opportunity to address some of the areas of APP.
  • The job of ‘quality control’ includes appropriate use of investigative key terms (accurate, reliable etc) which means it actually gets addressed!
  • Investigative work (after a few teething problems) goes more smoothly as students take responsibility for ‘their job’.

Even my older students – I first tried it with Year 8 – seem to get a lot out of it. I’ve deliberately left my Year 10 kids out of the trials so far, as they are going to be guinea pigs for something more detailed. We’ll see how they cope with them in a few weeks’ time.

I’d appreciate any comments, especially if you’ve tried something similar or have used the printable badges and descriptions (below) with your own classes. Like all the resources on my blog, all I ask for is some feedback so I can improve them.

Printable: job descriptions as pdf

Teaching Evolution 6/5: Skeletons in the Family Tree

 I’ve decided to add a quick post which fits in nicely with the set of five I made the other week. Basically, a bunch of interesting things showed up in science news online, more or less simultaneously, and I thought it was worth adding a new post instead of amending an old one.

One bit of news is that there is some evidence to suggest that humans bred with Neanderthals. This was reported in New Scientist, and the accompanying editorial was pretty good too. An interesting aspect is that Neanderthal DNA shows up in all modern human populations except those of purely African ancestry. This nicely illustrates the problems with the whole concept of a species as a distinct, separate group of individuals. Things are a little more complicated than that.

The SciencePunk website puts the human family tree in perspective by linking to some work estimating just how closely related we are to other modern species. Describing chimpanzees, gorillas and so on as cousins is a helpful shorthand, but this article makes the relationship a little more specific. It links to the Tree of Life website, which although not recently updated shows the wider genetic connections between diverse species. The page on us (Homo, naturally) includes links both popular and academic.

Not so much our family tree (in an immediate sense), but still something that students may be interested in. On Not Exactly Rocket Science, Ed Yong’s excellent science interpretation blog, a paper was referenced which gives more evidence that feathers were first used for warmth, not flying. A study has shown that the bones were probably not strong enough to support powered flight. Please note, I’ve carefully stated this as ‘used for’ not ‘evolved for’ as that is just asking for trouble with determinism…