I’m enjoying How We Learn for tying together quite a bit of what I learned during my year in grad school. The effects of spacing (chapter 4), testing (chapter 5), and interleaving (chapter 8, covered earlier) are powerful for learning, and we know a reasonable way to implement all of them: throw everything you want to learn into a spaced repetition system. What’s been most exciting is chapter 9, “Learning Without Thinking,” which covers perceptual learning.
School education is skewed toward verbal and symbolic learning: tests require you to explain your answer or work through the steps of a math problem. Perceptual learning shifts the focus to visual information. I’ve covered perceptual learning previously in the rather obscure realms of Stepmania and chick sexing, but it applies to almost anything. To see how powerful perception is as a component of domain expertise, consider chess. Quoting Carey:
On a good day, a chess grand master can defeat the world’s most advanced supercomputer, and this is no small thing. Every second, the computer can consider more than 200 million possible moves, and draw on a vast array of strategies developed by leading scientists and players. By contrast, a human player–even a grand master–considers about four move sequences per turn in any depth, playing out the likely series of parries and countermoves to follow. That’s four per turn, not per second. Depending on the amount of time allotted for each turn, the computer might search one billion more possibilities than its human opponents. And still, the grand master often wins. How?
He quotes a sketch of an answer from Chase and Simon’s 1973 study of perception in chess, “The superior performance of stronger players derives from the ability of those players to encode the position into larger perceptual chunks, each consisting of a familiar configuration of pieces.”
What does that mean? We don’t have a verbal or symbolic understanding of this ability, so it eludes the primary mode of computers, education, and–unfortunately for me–blog posts. We see the visual information of the board, and it activates different sizes of “chunks” in our mind. These chunks perhaps roughly correspond to levels of abstraction. A small chunk is a black pawn on g4. A little larger is seeing the king in check. A big, powerful, supercomputer-beating chunk is some kind of dominant offensive pattern formed by white’s combination of positions across the board.
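To make “chunks at different levels of abstraction” a bit more concrete, here is a toy sketch in Python. It is entirely my own illustration, not anything from Carey or from Chase and Simon: each chunk is written as a pattern detector over the raw board, with a higher-level chunk (a crude “check” detector) built on top of piece-level information.

```python
# Toy illustration of "chunks" as pattern detectors at different
# levels of abstraction. The board representation and both detectors
# are made up for this post, not taken from any chess engine.

# Board: a mapping from square to piece, e.g. {"g4": "black pawn"}
board = {
    "g4": "black pawn",
    "e1": "white king",
    "h4": "black queen",
}

def piece_chunk(board, square):
    """Smallest chunk: which piece, if any, sits on a single square."""
    return board.get(square)

def check_chunk(board):
    """Mid-level chunk: is the white king attacked by a black queen?

    Grossly simplified: it only looks for a shared file, rank, or
    diagonal, and ignores blocking pieces and every other piece type.
    """
    king = next(sq for sq, p in board.items() if p == "white king")
    for sq, p in board.items():
        if p == "black queen":
            same_file = sq[0] == king[0]
            same_rank = sq[1] == king[1]
            same_diag = abs(ord(sq[0]) - ord(king[0])) == abs(int(sq[1]) - int(king[1]))
            if same_file or same_rank or same_diag:
                return True
    return False

print(piece_chunk(board, "g4"))  # the small chunk: "black pawn"
print(check_chunk(board))        # the larger chunk: True (queen on h4 eyes e1)
```

The irony, and the point of the chapter, is that the expert never writes rules like these: the detectors are acquired perceptually, from exposure, which is exactly the part the sketch cannot show.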
…And how do we learn these chunks–an ability that all the raw performance and algorithmic sophistication of computer systems hasn’t replicated? I think we’re still in the early stages of understanding that, but the next stop on my reading list is papers from the Human Perception Lab.