Dear teachers: Thanks for what you do. But I have a message: Khan Academy videos are not for you. The videos are for students, and students are using them. So I think MTT2k is misguided. We should be sitting the students in front of the videos and trying to figure out what goes on in their head, rather than sitting the teachers in front of them.
Here’s why teachers won’t get it right: Expert blind spot refers to the idea that “content knowledge eclipses pedagogical content knowledge” (Nathan, Koedinger, & Alibali, 2001). EBS does not mean that teachers don’t have enough pedagogical content knowledge. It doesn’t mean that teachers (or researchers!) who know about EBS are suddenly able to think like a student. It means that when people think, they necessarily think using their content knowledge. Teachers cannot think like a student who does not have that content knowledge. Imagine a champion weightlifter trying to guess, with any degree of accuracy, how his barbell feels to a puny first-timer.
The problem with MTT2k is that the teachers are trying anyway to imagine what a student is thinking when they watch one of Khan’s videos. Because teachers aren’t busy learning the material, they have all kinds of attention to direct at any detail that pops out without a complete, polished explanation. In the real world, we never have that luxury; we have to assemble our knowledge from incomplete, messy fragments.
The original–and still the best–Khan critique is Derek Muller’s commentary that Khan Academy does not address misconceptions in its videos. He compared a straightforward video introducing physics concepts to one that first introduced common misconceptions and then cleared them up by presenting the correct concepts. Although students found the first video clear and concise, they didn’t actually pay attention to it or learn from it. (Wait, is that the same “clear and concise” that MTT2k producers are asking for?)
My point is not that the teachers are wrong, Khan Academy is wrong, and Derek Muller is right (just because we share a surname). My point is that we have to look at empirical data to determine which instructional styles actually work. So let’s answer some questions, shall we?
Can students learn from videos, or even lectures in general? YES, with two caveats. The debate over direct instruction versus discovery learning is long and brutal, but there are clear data that direct instruction can be effective, and Derek’s technique is one example of improving video instruction to accomplish something that opponents of direct instruction believe can’t be done with video.
Now, the caveats: the first is that students need to be active and constructive while watching the video. The fact that they freely pull up a Khan Academy video is a good start. There’s no way to make sure this happens 100% of the time, just as it won’t happen 100% of the time in the classroom.
The other is that students may overestimate their mastery of the material. In fact, I believe this is one of the major problems with Khan Academy’s videos. Khan, diligently working through every term expansion and long division, is just so good at making us watchers feel like we’re the ones doing the practice.
Should students start with concepts or procedures? Many students are educated without developing the kind of mathematical thinking that we mathematical thinkers would like them to have. This problem has often been attributed to an overemphasis on procedural learning in the classroom. But is the idea of starting with the concepts itself a form of expert blind spot? It’s a complex issue, and it seems most likely that we need both to learn, depending on the exact topic. Sal Khan is clearly interested in expanding Khan Academy’s conceptual video repertoire.
Should videos address students’ misconceptions? Sometimes. Derek Muller provides several compelling experiments where addressing misconceptions clearly improved performance over a straightforward presentation. But all of Derek’s examples are areas where students typically have strong misconceptions that override their learning (kind of a fake expert blind spot). Sometimes students are really just learning something new, and there are no real misconceptions to address. Sometimes they have even deeper problems that require a different approach.
My suggestion for beginning to approach some of the problems raised above: forget the flipped classroom–let’s flip Khan Academy. Let the practice be the guide. Students start with the practice and use it to figure out their weaknesses. Often, all a student needs is a flag on their error to be able to figure out the problem. But not always. Then you can bring in the videos–in particular, the video that addresses exactly the incorrect or missing knowledge of the student. It is difficult but not impossible to assess the deep conceptual knowledge that we’d ultimately like to provide students. And then Khan Academy can use real student data–not teachers’ rear-view mirrors–to figure out which videos are not getting the point across.
The same Derek Muller has an excellent interview with direct instruction champion John Sweller. Klahr & Nigam (2004) is one experiment where the direct instruction condition outperforms discovery learning. A stronger statement is provided by Mayer (2004).
Students, particularly low-ability students, are poor estimators of their ability level (Mitrovic, 2001). Re-reading a passage (and presumably re-watching a video) is a comparatively poor study strategy, but students tend to believe it’s better than testing themselves, which is one of the best study strategies (Kornell & Son, 2009).
Bethany Rittle-Johnson and colleagues have done a large body of work comparing conceptual and procedural learning (e.g., Rittle-Johnson & Alibali, 1999; Matthews & Rittle-Johnson, 2009). For learning decimal expansion, students used procedural and conceptual instruction iteratively to gradually build a better mental representation of decimal numbers (Rittle-Johnson, Siegler, & Alibali, 2001).
See Chi (2008) for a few ways of classifying conceptual learning.
See VanLehn et al. (2003) for a discussion.
 The Force Concept Inventory is a famous example of a conceptual assessment. It helped reveal that students who were scoring high on exams in a physics class weren’t actually learning the concepts from the lectures.