
Making a MOCKery of Feedback

Like many teachers I’ve recently spent a lot of time marking mock papers. An important question to consider is what you’re going to do once you’ve marked them. Until a couple of years ago I had a really clear answer to that, a process I’d spent a while developing that I thought worked really well, and was pretty proud of. It looked something like this:

  • I’d start with a few very general points: overall performance and maybe one or two key things they all needed to work on.
  • I’d give out mark schemes (these were written in examiner style but adapted to be slightly more student-friendly where possible).
  • Papers would be handed back. During marking I would write the mark for each individual question on their papers, but no overall score, percentage or grade.
  • Students would look through their papers and transfer the mark for each question into the appropriate box on the mark scheme. They could then add up the score for themselves and work out their overall percentage (I would already have this recorded elsewhere).
  • Each score would then be ‘RAGged’ – highlighted in different colours to indicate high/medium/low performance on each question.
  • There was a ‘WIN’ feedback section for them to fill in, reflecting on what went Well, what needed Improvement and Now I need to… (this is part of our school’s marking policy).
  • Students would pick specific questions to work on making improvements, based on their RAG analysis.
  • Towards the end of the lesson, once they’d all had time to ask questions, query marks and make some improvements, I’d tell them the grade boundaries.

I genuinely believed that this was a really effective approach. I felt that it forced students to forensically analyse their performance, identify strengths and weaknesses, and prioritise their areas to work on. They got loads of detailed feedback and they all got a chance to improve some answers. Also, if someone looked in their folders it looked like some pretty awesome AFL had been going on. I demonstrated it and rolled it out in my department, was asked to showcase it at a HoDs meeting as an example of best practice, and even had a lesson observation by another HoD and the Headteacher to see it in action.

But more recently I started to have doubts. I think it was a really well-intentioned approach, but I was starting to see some problems. Despite students confidently saying things like “this helps me to see where I need to improve”, they weren’t actually making significant improvements as a result, at least not beyond the specific questions they were redoing in that lesson. As Dylan Wiliam notes, feedback should improve the student, not the work. There are some further issues I’ve realised with this approach:

Domain sampling

Essentially, any single assessment can only assess a sample of the domain that you want students to know, and there’s no guarantee that future assessments will sample the same set of knowledge rather than other areas of the domain (explained in much more detail here). So students might improve their answers to a couple of specific questions, but those topics may not be assessed in the real exams, and so those improvements haven’t necessarily helped prepare them for it.

Feedback overload 

Going through an entire paper in a lesson means dealing with far too much feedback for students to process. You could stretch it out across multiple lessons, but do we have the time? Students need to be able to focus on one or two priorities at a time, securing those before worrying about something else. All too often, teachers, students and parents fail to understand that less is more.

Misguided priorities

Under this approach, students would be most likely to work on improving those ‘red’ questions where they’d scored the lowest. The logic is clear: work on the areas where there is the biggest gain to be made. Except this only makes sense if you’re just seeking to improve your score on this particular assessment. Related to domain sampling above, we want students to work on the things that have the most impact on their ability to answer a wide range of questions.

For example, on a recent paper many of my students struggled with a question about the operationalisation of variables. It was only a 2-mark question, and a few got one mark because they said something close enough to what was on the mark scheme, even though it was clear they didn’t really understand it. Few students would target that as a priority during the feedback; most would instead focus on a higher-tariff question. But operationalisation is a really important concept which underpins a lot of research methods in Psychology. Understanding it properly is likely to have a much bigger impact on a range of other related ideas, and will fundamentally improve their understanding of methods in general. So focusing on that is much more beneficial than, for example, an 8-mark question on which they scored 0 because they got the names of two theories muddled up and wrote about the wrong one. It takes teacher expertise to see and know this, and leaving it up to students will likely leave them working on the wrong things.

At our school, Year 13 students have a mock ‘results day’ – they get all their results together in a brown envelope, on a grade sheet which also lists their university offers. While there are a number of points that could be debated about this, I think on balance it’s a worthwhile process. But a consistent complaint from some students and staff every year is that the moratorium on results up to that day means that teachers cannot give timely feedback, and students are left ‘treading water’, not knowing how they’ve done or what to improve. I think this reveals an interesting and deeply held belief that feedback has to be coupled with results and ‘going through the paper’. Teachers feel that because they can’t release results, students can’t get useful feedback and so can’t start addressing problems. With the summer exams looming on the horizon, it’s a fair concern that you want to address issues as soon as possible. However, you don’t need to share results to do that, or even have the papers to hand back. And I think feedback without papers might actually be a more effective approach in many ways.

This year I started giving feedback pretty much as soon as I started marking. Since I mark by question rather than by individual paper, I get a really good feel for any issues across the cohort. I didn’t need to know whether a student had scored a particular percentage across the whole paper, just that everyone needed to improve on something. When I saw a class I could spend a few minutes addressing something that had come up in my marking, without needing them to have their papers in front of them: “I noticed on the mock that a number of you struggled with… so let’s have a look at that together.” I could explain the concept they’re struggling with, give concrete examples and non-examples of good answers, model the thinking process I want them to follow, then give them some independent practice. What are the advantages of this approach?

Reduced overload. Feedback is drip-fed over a number of lessons, focusing on just one or two things at a time rather than giving it all at once.

No opt-out. Students can’t actually remember what they wrote or how good their answer was. No-one can just switch off because they can see they got high marks, they all need to pay attention because it *might* apply to them. And for those who did do well on something, a bit more practice to reinforce is never a bad thing. Everyone benefits.

No haggling. Students are only focusing on the knowledge/skills I want them to develop. There’s no time wasted comparing scores with their peers, arguing over whether something should have got an extra mark or having to explain repeatedly to various students why they didn’t get a mark.

Focus on mastery, not performance. Students receive no performance feedback – they don’t know their score or grade (and know they’re not getting it at the end of the lesson). They aren’t distracted by this because the focus is solely on the knowledge or skill, and they haven’t got their paper in front of them. This takes the emotion out of the process too.

Improving the student, not the work. By talking about the issues in the absence of the paper, I’m improving their general knowledge and understanding. When they start to do some independent practice, they are developing their ability to answer questions in different contexts, not just improving the specific answer from the paper.

Of course, I’m not saying that students shouldn’t, at some point, get to interrogate their paper and see where they did or didn’t score marks. This will help identify specific, individual priorities in terms of filling gaps in knowledge, knowing how to answer certain types of question, or issues with exam technique. But having already received some of that feedback in advance softens the blow, and gives them the knowledge and satisfaction that they’ve already started addressing those priorities. I also acknowledge that there will be some subject-specific variation here – it may well be that going through the paper question by question is much more useful in, say, Maths than it is in Psychology (although I’d love to know why that might be).

However, I think the general principles of the approach I’ve outlined here make it more likely that the real change that happens as a result of feedback is not just on their exam paper in purple pen, or on a multicoloured mark scheme or feedback sheet. Those things will soon get left in a folder, on their desk or at the bottom of their bag. It’s the change inside the students’ minds that they will take with them into the future.
