You all know the score. A train leaves one city travelling at 35 miles per hour and another races toward it at 25 miles an hour from a city 60 miles away. How long do they take to meet in the middle? Leaving aside the actual answer of 4 hours (factoring in signalling problems, leaves on the line and a pile-up outside Clapham Junction), these sorts of real-world scenarios are often used as teaching tools to make dreary maths “come alive” in the classroom.
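For the record, the textbook calculation is simple enough to sketch in a few lines of Python (my illustration, not anything from the study): the trains close the 60-mile gap at the sum of their two speeds.

```python
# Two trains set off towards each other from cities 60 miles apart.
distance = 60               # miles separating the two cities
speed_a, speed_b = 35, 25   # miles per hour

# They close the gap at the sum of their speeds: 60 mph.
closing_speed = speed_a + speed_b
hours_to_meet = distance / closing_speed

print(hours_to_meet)        # 1.0 hour (delays not included)
```

Note that they don't actually meet in the middle, either: in that hour the faster train covers 35 miles to the slower train's 25.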
Except they don’t really work. A new study shows that, far from easily grasping mathematical concepts, students fed a diet of real-world problems failed to apply their knowledge to new situations. Instead, and against all expectations, they were much more likely to transfer their skills if they were taught with abstract rules and symbols.
The use of concrete, real-world examples is a deeply ingrained part of the maths classroom, and its advantages have never really been tested properly, because they seem so self-evident. Maths is a largely abstract field, which makes it hard both to learn and to apply in new situations. The solution seems obvious: present students with familiar examples that illustrate the concepts in question, and they can make connections between their existing knowledge and the more difficult ideas they are trying to pick up.
The train problem is a classic example. Another is the teaching of probability with rolls of a die, or by asking people to pick red marbles from a bag containing both blue and red ones. The idea is that, armed with these examples, students will recognise similar problems and apply what they have learned. It’s a technique deeply rooted in common sense, which is probably as good an indicator as any that it might be totally wrong.
Jugs, pizza, balls and symbols
Jennifer Kaminski from Ohio State University demonstrated this by recruiting 80 undergraduate students and teaching them a simple mathematical system in which three separate elements could be combined. The system embodied very basic mathematical ideas, like the concept of zero, or the idea of commutativity – that the order in which things are added doesn’t change the result (1+2 and 2+1 both equal 3).
Three groups were taught using familiar, concrete examples. The first group were told to imagine measuring jugs containing varying levels of liquid and asked to work out how much liquid remained when two jugs were combined. So, for example, combining a jug that was a third full with another that was two-thirds full would give a full jug. Combining two jugs that were both two-thirds full would fill one jug, with a third of a jug left over as a remainder.
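The rule lurking behind the jugs appears to be arithmetic modulo three, counted in thirds of a jug, with a full jug playing the role of zero. A minimal Python sketch of that reading – my reconstruction, not code from the paper:

```python
def combine(a, b):
    """Combine two jug levels, each measured in thirds (1, 2 or 3).

    A full jug (3 thirds) acts like zero: it can be set aside,
    so only the remainder beyond a full jug matters.
    """
    return ((a + b - 1) % 3) + 1

# A third-full jug plus a two-thirds-full jug gives a full jug.
print(combine(1, 2))  # 3 (one full jug)

# Two jugs each two-thirds full leave a third of a jug over.
print(combine(2, 2))  # 1 (a third of a jug as remainder)
```

Commutativity drops out for free: combine(1, 2) and combine(2, 1) give the same answer.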
The second group was taught using another similar example involving pizza slices alongside the jugs, and the third group learned both of these, along with a third system involving tennis balls. They were told that each new system worked in the same ways as the old ones, and obeyed similar rules.
The fourth group was taught in a more generic way. The vivid jugs, pizzas and balls were replaced with meaningless, arbitrary symbols, and the students simply had to learn how these could be combined. For example, a circle and a diamond combined to form a wavy rectangle, while the sum of two circles was a diamond.
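Under the hood, the symbol system seems to be the same mod-three arithmetic as the jugs; only the labels change. A hypothetical mapping (circle → 1, diamond → 2, wavy rectangle → 3 – my labelling, chosen to be consistent with the two combinations described above) can be sketched like so:

```python
# Hypothetical mapping of the arbitrary symbols onto the same
# three-element system as the jugs (my labelling, not the paper's).
value = {"circle": 1, "diamond": 2, "wavy rectangle": 3}
symbol = {v: k for k, v in value.items()}

def combine(a, b):
    """Combine two symbols using the shared mod-three rule."""
    return symbol[((value[a] + value[b] - 1) % 3) + 1]

print(combine("circle", "diamond"))  # wavy rectangle
print(combine("circle", "circle"))   # diamond
```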
After the training, feedback and 24-question multiple-choice tests, Kaminski was satisfied that the vast majority of students had picked up the principles successfully. She then asked the students to apply their knowledge to a fresh setting, described as a foreign children’s game involving three objects. Children pointed to two of the objects and one child, who was “it”, had to point to the correct final one to win.
The students were told that the game’s rules were very much like those of the systems they had just learned. They were shown some examples so that they could deduce these rules and were tested with 24 multiple-choice questions. These were, in fact, the same as the questions they had previously answered, but “translated” to the new setting.
Generic beats concrete
Contrary to all expectations, the group taught with generic symbols fared best, answering 76% of the questions correctly. They comfortably outperformed the three groups taught with real-world examples, who all scored between 44% and 51% – no better than chance. The group taught using three concrete examples fared just as poorly as the one that learned only a single system (see Experiment 1 below).
So the common sense idea that students are capable of picking out the similar threads from related examples seems to fall short. In fact, quite the opposite was true – they were better able to apply their knowledge to a new setting if they were taught using generic abstract examples.
The result seems so at odds with the traditional view of education that Kaminski tested it further. She recruited another 20 volunteers and taught them the jug and pizza systems, but this time, she explicitly spelled out the similarities between the two. Astonishingly, this didn’t help matters and the students’ scores still reflected random guesswork more than learned problem-solving.
In a third experiment, the students were asked to work out the similarities between the two systems for themselves. This time, nine of the 20 students scored very high marks of 95%, but the others still performed no better than chance. So this type of teaching helps some high-performing students to achieve a top grade, but fails to help the rest. And on average, the ‘class’ still scored less than the group that learned the generic symbols.
In a final experiment, Kaminski wanted to see if a combination of real-world and generic examples would have a stronger effect than either alone. Clearly, they both have their advantages – generic examples seem easier to apply, while real-world ones are easier to pick up at the start. Even so, students who only learned the abstract symbol system still outperformed a second group who learned the jugs method, followed by the symbols (see Experiment 4 in the graph above).
Kaminski’s work will no doubt come as a shock to those in the education sector. While it’s certainly true that students engage with mathematical concepts more easily when faced with real-world examples, these striking experiments suggest that they aren’t actually picking up any real insights about the underlying principles involved. And without those insights, they are unable to apply their knowledge from one real-world example to another, exactly the opposite of what maths teachers want to achieve!
Kaminski isn’t calling for an end to all real-world examples in classrooms, but she suggests that they should only be used when the basic abstract principles have been introduced. Deeply grounding an abstract concept in a real-world example could actually do more harm than good, by constraining the knowledge that students gain and hindering their ability to recognise the same concept elsewhere. Questions about bags of marbles and speeds of trains really are just about bags of marbles and speeds of trains.
Update: Chad has a considered analysis of these results over at Uncertain Principles. Have a look.
Reference: Kaminski, J.A., Sloutsky, V.M., Heckler, A.F. (2008). The advantage of abstract examples in learning math. Science, 320(5875), 454-455. DOI: 10.1126/science.1154659
Images: Train photo by Nachoman-au; drawing and graph from Science.