Is Practice Dead?

According to a new study, “Deliberate practice is unquestionably important, but not nearly as important as proponents of the view have claimed.” When the meta-analysis behind that claim breaks previous research down by domain, deliberate practice explains only 26% (games), 21% (music), 18% (sports), 4% (education), or a minuscule <1% (professions) of the differences in performance. The aim of this research isn’t to provide advice, but if you start to believe that practice isn’t that important or effective, you might not pursue it wholeheartedly. I’d like to argue that that’s a big mistake.

Let’s start with the “10,000-hour rule” that is always cited in articles about practice and performance. The standard view of this rule seems to conflate two useful ideas. The first idea is that expert-level performance in cognitive domains takes a great deal of cognitive work (we’ll see why). Call this the practice threshold hypothesis. The second idea is that the specific techniques used to practice make a big difference. Call this the practice quality hypothesis. The meta-analysis covers studies that use the original definition of deliberate practice from Ericsson, Krampe, and Tesch-Römer, 1993: “effortful activities designed to optimize improvement.” That definition captures neither of these key ideas; it says nothing about a threshold of cognitive work and nothing about the quality of practice.

The origin of 10,000 hours dates back at least to Simon & Barenfeld, 1969, where they discuss not hours but the size of the “vocabulary of familiar subpatterns” needed by chess masters and Japanese readers: 10,000 to 100,000. Just as reading in a foreign language won’t make sense if you don’t know the key words (this is the best example I can find), it isn’t simply that “more practice is better” but that a large minimum threshold of practice is necessary for mastery. Obviously this amount is not exactly 10,000 hours. Chess offers effectively endless board positions, so the figure is not an upper limit; it’s just that few people reach another major threshold beyond 10 years of practicing 20 hours per week (which works out to roughly 10,000 hours), and those who do may be beyond the comprehension of mere masters. Or as Professor Lambeau says in Good Will Hunting, “It’s just a handful of people in the world who can tell the difference between you and me.”

To discredit the practice threshold hypothesis, the meta-analysis would need to examine total accumulated practice of anything that may be related to the domain. In fact there seems to be an inverse correlation between the variance explained in a domain and the difficulty of measuring accumulated practice there. Chess masters tend to have studied chess their entire lives, and musicians have played music (of some form) their entire lives. Sporting skill can come from a somewhat wider range of physical training. Education and the professions draw on a still wider range of skills. A mathematician may make a “natural” programmer because of extensive experience with analytical thinking, but his math expertise doesn’t get counted as “practicing programming”.

Now let’s talk about practice quality. There isn’t a dominant theory of exactly what makes practice good (and there never will be, since it is domain-specific), which makes quality difficult to examine in even a single study, much less across many studies and domains. As far as I can tell, the quality of practice is not considered in the meta-analysis at all. So people showing up half-heartedly, practicing something they’ve already mastered, or practicing something they aren’t ready for are potentially all counted the same as people who practice “optimally”, whatever that is.

Again we see that in the domains where practice explains little of the variance, practice quality is much harder to measure. In games and music a good way to practice is simply to play the game or play the music (though there are often better ways). Compare that to professional programming. Few people really practice once they’ve learned the language. The quality of continued learning on the job depends on a huge number of factors. Most likely these could not be accounted for in anything but an ethnographic study (unfortunately I couldn’t track down the one study from the meta-analysis targeting professional programming).

In short, this study does not tell us about the potential of practice because its measure doesn’t capture when practice is most useful. Unfortunately, because what constitutes a practice threshold and practice quality is so domain-dependent, we’re unlikely to ever see a meta-analysis that captures the full potential of practice across domains. What the study may tell us is that the common idea of practice isn’t nearly good enough, especially in something as important as professional work. If it only makes a 1% difference, you aren’t doing it right.

There are many sources of ideas for better practice. Popular-science works such as Moonwalking With Einstein, Practice Perfect, and The Little Book of Talent are all good places to start. The Cambridge Handbook of Expertise and Expert Performance is a collection of articles across a variety of domains showing the progress that has been made since the 1993 definition of deliberate practice.

Finally, a small pitch of my own: I’m reviving my wiki to compile general thoughts on effective learning and practice, as well as a glimpse of my personal efforts to practice programming and other skills. I encourage you not only to check mine out but also to start something similar, and maybe we can conduct a study of super-effective learners!
