Feedback in Digital Educational Math Games
In the large majority of digital math games, in-game feedback is not very effective by the standards of what we know about effective feedback from research in mathematics education. What is feedback in the first place? In education, feedback is a critique, analysis, or evaluation of a student's work or performance, with the goal of increasing performance and improving outcomes. Formative assessment expert Dylan Wiliam sheds light on some of the most important aspects of good feedback: with effective feedback, students can (a) use it to improve or get better at the task at hand, and (b) ultimately self-reflect and develop a keen eye for self-assessing their own work.
“There is still a difference in being told by the game that you need a simplified solution and that the level can be solved in fewer moves, and being told specifically how one might achieve this standard for that specific level.”
An issue with good feedback is that it is difficult to produce because there is no one-size-fits-all approach. Although there are a few general guidelines, the conditions for effective feedback vary (e.g., with the student, the task). Generally, there needs to be some description of or suggestion for what to do to improve. Say, for example, you gave your students a 10-question multiple-choice math quiz. Marking their correct and incorrect responses along with a letter grade (something we see quite often in education) tells students how well or poorly they did, and that is important, as it tells them how far they are from the desired goal, but it doesn't tell them what to do next or how to improve. The timing of feedback needs to be taken into consideration as well. For example, feedback is useless when there is no time for students to respond to it, or when they do not yet have the ability to make the suggested changes. There is also something to be said about the perception of the recipient of feedback and whether they see improving as worth the effort, or as something they are even capable of. Although anecdotal, I've seen numerous times where students simply don't think improving is worth the effort, usually either because they feel they are already proficient enough at the task at hand, or because they don't think they are capable of meeting its demands.
Clearly, a variety of elements go into the approach of giving feedback. What's worse? If not done diligently, feedback has been shown to actually decrease student performance (Hattie & Timperley, 2007). In the multiple-choice quiz scenario I presented earlier, students won't know how to improve, so it is unlikely that any performance increase would occur. It is also known that providing students with both feedback about the task and a grade or praise (something else we see often in education) tends to be worse than providing feedback about the task alone. That is, the grade is what gets focused on, and the comment goes ignored.
So what does feedback look like in digital math games? Despite what we know about feedback from research, digital math games still lag behind and have a long way to go. Before I start, it's important to note that while we know a great deal about effective feedback in school classrooms, that doesn't necessarily mean the same types of feedback would be effective or useful in a game. They can, however, serve as good hypotheses to be tested. For starters, most feedback is given after the performance, and I liken it to a grade: after completing a level, there is some indication of how well the player performed, often in the form of stars or a score. Some of the better titles (more engaging, better at imparting conceptual knowledge or building number sense), such as Wuzzit Trouble (BrainQuake Inc) or DragonBox (WeWantToKnow), go a step beyond and award stars based on specific features of your performance. For example, in DragonBox Algebra, players can earn a maximum of 3 stars for completing each level:
1 star - Completed the level without a simplified solution and went over the suggested number of moves
2 stars - Completed the level with a simplified solution but went over the suggested number of moves
3 stars - Completed the level with a simplified solution in less than or equal to the level’s suggested number of moves
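To make the rubric concrete, here is a minimal sketch of that star logic in Python. The function name and parameters are my own invention for illustration, not DragonBox's actual code; it simply encodes the three tiers above.

```python
def award_stars(simplified: bool, moves_used: int, suggested_moves: int) -> int:
    """Return 1-3 stars under a DragonBox-style rubric (hypothetical names).

    Note the rubric gates everything on the simplified solution first:
    without it, the move count doesn't matter.
    """
    if simplified and moves_used <= suggested_moves:
        return 3  # simplified solution, within the suggested move budget
    if simplified:
        return 2  # simplified solution, but over the suggested moves
    return 1      # completed the level, but without a simplified solution

print(award_stars(simplified=True, moves_used=5, suggested_moves=6))   # 3
print(award_stars(simplified=True, moves_used=8, suggested_moves=6))   # 2
print(award_stars(simplified=False, moves_used=5, suggested_moves=6))  # 1
```

Encoding the rubric this way also makes the feedback gap visible: the function can score a performance, but nothing in it says how to reach the next tier.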
So while the feedback is still given in stars, in this game the player at least knows, in general terms, what they have to do to improve. However, there is still a difference between being told by the game that you need a simplified solution and that the level can be solved in fewer moves, and being told specifically how one might achieve those standards for that level.
In a study by Yang, Chu, & Chiang (2018), 2nd-grade students played “Kingdom of Addition & Subtraction”. In this game, when students answered a question incorrectly, the game provided them with immediate feedback. For example, the game asks students: “A pencil is $25, an eraser is $10 more than a pencil, and a ruler is $13 more than an eraser. How much is a ruler?” If students provide an incorrect answer, the first tier of feedback is given in the form of a text prompt (i.e. “a pencil + 10 = an eraser, an eraser + 13 = a ruler”). Students in the experimental group, however, were given a second tier of feedback on top of this text description: if they answered incorrectly twice, they were shown a pictorial representation of the problem, and each “more” in the problem's wording changed to “higher”. Unsurprisingly, students in this group outperformed students who only received the text-prompt feedback. The big differences in the approach here are:
Timing - Feedback was given during performance, while students were engaged; the focus was on process. As mentioned earlier, feedback given after performance runs the risk that students won't find it worth the effort to go back and improve. Gresalfi and Barnes (2016) ran into this dilemma as well: students didn't bother to use feedback to make revisions in year 1 of their study. Students in year 2 (who performed worse at the pre-test) did better than year 1 students on the post-test, as they received feedback on their conjectures and initial thoughts rather than on their worked-out answers.
Type - The feedback was more specific about how students might need to look at that particular problem in order to meet the desired goal (i.e. the game telling the player that by “more” we mean “higher”, with a visual aid to go along with it). I therefore feel it is better than DragonBox Algebra's or Wuzzit Trouble's less specific approach (i.e. solve the level in fewer moves, but no specific guidance on how a player should go about doing that, for any of its levels).
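The two-tier design from the study can be sketched as follows. To be clear, this is my own toy reconstruction of the logic described in the paper, not the game's actual implementation; the function, field names, and image filename are all placeholders.

```python
def feedback_for(problem: dict, wrong_attempts: int, experimental: bool) -> dict:
    """Two-tier feedback in the spirit of Yang, Chu, & Chiang (2018).

    Tier 1 (everyone): a text prompt restating the quantity relationships.
    Tier 2 (experimental group, after a 2nd wrong answer): a pictorial aid,
    plus the problem reworded with "more" changed to "higher".
    """
    feedback = {"text_hint": problem["relation_hint"]}
    if experimental and wrong_attempts >= 2:
        feedback["image"] = problem["image"]
        feedback["reworded"] = problem["statement"].replace("more", "higher")
    return feedback

pencil_problem = {
    "statement": ("A pencil is $25, an eraser is $10 more than a pencil, "
                  "and a ruler is $13 more than an eraser. How much is a ruler?"),
    "relation_hint": "a pencil + 10 = an eraser, an eraser + 13 = a ruler",
    "image": "pencil_eraser_ruler.png",  # placeholder for the pictorial aid
}

# First wrong answer (any group): text prompt only.
print(feedback_for(pencil_problem, wrong_attempts=1, experimental=True))
# Second wrong answer, experimental group: picture and reworded problem added.
print(feedback_for(pencil_problem, wrong_attempts=2, experimental=True))
```

Even in this toy form, the key design choice stands out: the feedback escalates in specificity as the learner struggles, rather than repeating the same hint.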
So what do we make of all this? There isn't a one-size-fits-all approach to feedback. It's heavily dependent on the goal, the recipient, and the relationship between the recipient and the one giving the feedback. Digital math games often give feedback in the form of stars and points, but they don't tend to offer specific feedback on how a student may improve their performance on a given level, and they don't provide any real incentives (compared to non-educational games, at least: rewards, items, quests, etc.) for students to return to levels and complete them better than on their first attempt. My own personal suggestions:
Provide an incentive for players to want to improve. Non-educational games provide players with additional or exclusive story scenes, items, and rewards that make players want to invest in improvement. These are far more enticing than virtual stars, even to kids.
Provide more targeted, specific feedback on how to improve on a given level. Saying to a student “hey, why don't you look at the problem this way?” is much more specific than telling them “try solving this problem in X moves next time”. The first puts one on a path to get better, while the latter doesn't tell someone where to even begin.