All too often, students' responses to feedback are ineffective. It is worth asking: is my feedback working?
At the end of a topic my class will typically do some form of short test. As you mark a test paper you quickly come to realise which questions students are performing well on and which they are not, revealing both overall trends and individual ones. If there is a question that many students have struggled with, there are a few possible causes:
- My teaching of that area
- Difficulty in accessing the question (e.g. it is a poorly written question or they are not well-prepared to tackle the style of question they’ve been given)
- The concept is simply a tricky one and the question was hard as it required them to apply their understanding in a complex way
Individually, students will struggle with different things, and this could be for any of the reasons above, or perhaps for further reasons such as the way the student has studied (which can be influenced by oh so many factors). It is our job to ensure we understand why students have not achieved more highly and to support them in developing these areas. It is our students' job to reflect on their achievement, to take responsibility and to do their best to get even better.
What feedback is worth giving?
I guess by "is my feedback working?" in the title of the post I mean, "do the students learn from their mistakes?" or "do the students do better next time?" I feel it is important that I help my students develop an understanding of where they have gone wrong, why, and how they could study to improve. But that's hard. Really hard. Whilst I am about to run through a few of my ideas, I am not claiming that they work, nor have I performed a detailed school-wide research study to show it. I want you to decide whether these things are worth trying and whether they could be effective for you. These ideas do apply across subjects but some would need tweaking, so apologies for the sciencey slant.
*Note: everything here uses mail merge from my Excel markbook, so the time spent on each is very reasonable.
I tend to do the following:
I always do question-by-question analysis as it helps make it clear to me how the group has performed overall and how individuals have done, but I also allow them to see their analysis and the class average. The class average, in my view, helps contextualise the test, as tests are not all magically of the same difficulty: achieving 80% in two tests in a row might be in line with the average for one and well above or below it for another.
Does it help students learn from their mistakes? Anecdotally, they seem to respond well to knowing whether the test was 'easy' or 'hard', and many suggest they like knowing how they have done compared to the averages on specific questions, as they can identify what they have slipped up on that others haven't (and those who have done well can celebrate their successes). That does help some students identify areas they need to work on compared to the group and gives an individual some areas for development. I do wonder whether the positive impact outweighs any negative impact it might have on those who have performed poorly across multiple questions.
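As a rough illustration of what the question-by-question analysis computes, here is a minimal Python sketch. Every name, mark and mark-scheme value below is hypothetical, and the real data lives in my Excel markbook; the sketch just shows the arithmetic behind per-question class averages and each student's gap from them.

```python
# Hypothetical mark scheme and class results; the real data sits in an
# Excel markbook, this just shows the arithmetic.
MAX_MARKS = {"Q1": 4, "Q2": 6, "Q3": 5}

marks = {
    "Alice": {"Q1": 4, "Q2": 3, "Q3": 5},
    "Ben":   {"Q1": 2, "Q2": 5, "Q3": 1},
    "Cara":  {"Q1": 3, "Q2": 2, "Q3": 4},
}

def class_averages(marks):
    """Mean mark per question across the whole class."""
    return {
        q: sum(student[q] for student in marks.values()) / len(marks)
        for q in MAX_MARKS
    }

def gaps_from_average(name, marks):
    """How far above or below the class average one student sits, per question."""
    averages = class_averages(marks)
    return {q: marks[name][q] - averages[q] for q in averages}
```

A negative gap on a question the rest of the class handled well flags an individual area for development; a low class average flags a whole-class issue (or a hard question).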
For my GCSE groups I have a collection of comments for each question and I Mail Merge these onto a sheet. Essentially, if someone slips up on a question then it will come up with a comment for that question, and if they have several questions that they need to work on then they will have multiple ‘individual’ comments. Certainly they feel as though I have put in a lot of effort to give them individual feedback (even if I tell them it is just my mail merge from a spreadsheet), and it gives them plenty of individual tips and advice on what to do to improve. Generally these are things that they can take on board ‘for next time’.
As with everything in this post, I have no real measure of their effectiveness in helping students make improvements. Do they actually use these comments? Does the positive impact of me showing that I care have a significant impact on how motivated they are in my subject? Perhaps they won’t look again at the specific test comments but they will revise more for future tests?
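To make the mail-merge idea concrete, here is a rough Python sketch of the same logic: one prepared comment per question, attached for any student who dropped enough marks on it. The comment bank, the threshold and the marks are all illustrative, and my actual version is a mail merge from Excel rather than code.

```python
# Hypothetical comment bank: one prepared comment per test question.
COMMENTS = {
    "Q1": "Revise rearranging the equation before substituting values.",
    "Q2": "'Explain' questions need a reason, not just a description.",
    "Q3": "Aim for one clear point per mark available.",
}

def feedback_sheet(name, student_marks, max_marks, threshold=0.5):
    """Merge in the comment for every question where the student scored
    below `threshold` of the marks available (threshold is illustrative)."""
    lines = [f"Feedback for {name}:"]
    for q, mark in student_marks.items():
        if mark < threshold * max_marks[q]:
            lines.append(f"- {q}: {COMMENTS[q]}")
    return "\n".join(lines)

sheet = feedback_sheet("Ben", {"Q1": 2, "Q2": 5, "Q3": 1},
                       {"Q1": 4, "Q2": 6, "Q3": 5})
```

A student who slips on several questions simply collects several comments, which is what makes the sheet feel individual even though each comment was written once for the whole class.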
Some students have trends in where they got questions wrong – for example, they may have failed to state a correct equation multiple times, misread the command word over and over again, or repeatedly failed to write enough for the marking points available. Sometimes I use a Response Analysis approach where, for each of the questions (or a chosen selection of difficult questions), students have to read the question and their response and decide on a reason as to why they got the question wrong. By doing this, they may identify trends in what they have struggled with, and it could support them in their preparation for future tests.
Does this work? Will students take these trends on board? You would need to spend time getting students to use this information to set their own targets, and to review these before your next test. This way they are reminded of what they can work on to better prepare themselves, and you can then review their performance after that test to determine whether there has been any improvement.
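The trend-spotting side of this is trivial to tally up. A quick Python sketch, with hypothetical reason codes (the categories students tag their wrong answers with would be your own):

```python
from collections import Counter

# Hypothetical reason codes one student assigned to their wrong answers
# during a Response Analysis exercise.
reasons = [
    "misread command word",
    "no equation stated",
    "misread command word",
    "not enough marking points",
    "misread command word",
]

# The most frequent reason is an obvious candidate for that student's target.
top_reason, count = Counter(reasons).most_common(1)[0]
```

Whether done by hand on the test paper or counted like this, the point is the same: the student sees their dominant failure mode rather than a scatter of unrelated mistakes.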
This makes a lot of sense for those questions where the whole class has struggled. You can review the content with the whole group, ensure they have an understanding of it, then get them to re-do the question or to tackle a similar example. It would be crazy not to do this before you move on to another topic. I tend to get them to tackle a similar question as it avoids a learned response being repeated; doing something different encourages them to think harder and demonstrate their understanding rather than simply use their memory skills.
Does it work? How do you make feedback stick?
The ideal scenario is that you give students some feedback, they take it on board, refer to it when studying in the future, and review their performance in future assessments against the areas for development identified in that feedback. This generally does not happen magically, particularly in the younger years.
The real question is: how do I ensure students do this? It's not about this 'reacting to feedback' thing that people say. A lot of my students react to my feedback because I tell them they have to respond to it. This doesn't necessarily make them learn more (though I hope my feedback is written in such a way as to expect them to), and it absolutely doesn't turn them into reflective learners who will regularly consider my feedback in the future in order to make long-term improvements in their work. That comes from giving students high-quality feedback; from students setting relevant targets worded so that they can be applied to future assessments; and from regularly referring to these in lessons, giving time to review them before an assessment, and reflecting on them once again afterwards. If the feedback and targets don't allow that, then apart from a few of our students…you've really wasted your time.
My latest efforts
Something I have tried with my Year 10s, to whom I have given a lot of feedback on tests and homework, is to summarise all of the comments and assessment results together onto one double-sided sheet, with a space to write in targets. They'll keep this and we will refer to it regularly so as to develop the skills and knowledge that they need, based on their own targets set following my feedback. It'll be interesting to see if it actually improves their performance in their chosen areas.
This is nothing new, but I think it is important to be wary of assuming that students responding to feedback means they're doing something useful. Let me know your thoughts.