Dear teachers: Khan Academy is not for you

Dear teachers: Thanks for what you do. But I have a message: Khan Academy videos are not for you. The videos are for students, and students are using them. So I think MTT2k is misguided. We should be sitting the students in front of the videos and trying to figure out what goes on in their heads, rather than sitting the teachers in front of them.

Here’s why teachers won’t get it right: Expert blind spot refers to the idea that “content knowledge eclipses pedagogical content knowledge” (Nathan, Koedinger, & Alibali, 2001). EBS does not mean that teachers don’t have enough pedagogical content knowledge. It doesn’t mean that teachers (or researchers!) who know about EBS are suddenly able to think like a student. It means that when people think, they necessarily think using their content knowledge. Teachers cannot think like a student who does not have that content knowledge. Picture a champion weightlifter trying to imagine–with any degree of accuracy–how his barbell feels to a puny first-timer.

Your students are stacked on a motorcycle in the right lane.

The problem with MTT2k is that the teachers are trying anyway to imagine what a student is thinking while watching one of Khan’s videos. Because teachers aren’t busy learning the material, they have plenty of spare attention to direct at any detail that lacks a complete, polished explanation. In the real world, we never have that luxury; we have to assemble our knowledge from incomplete, messy fragments.

The original–and still the best–Khan critique is Derek Muller’s commentary that Khan Academy does not address misconceptions in its videos. He compared a straightforward video introducing physics concepts to one that first introduced common misconceptions and then cleared them up by presenting the correct concepts. Although students found the first video clear and concise, they didn’t actually pay attention or learn from it. (Wait, is that the same “clear and concise” that MTT2k producers are asking for?)

My point is not that the teachers are wrong, that Khan Academy is wrong, or that Derek Muller is right (just because we share a surname). My point is that we have to look at empirical data to determine which instructional styles actually work. So let’s answer some questions, shall we?

Can students learn from videos, or even lectures in general? YES, with two caveats. The debate over direct instruction and discovery learning is long and brutal, but there are clear data that direct instruction can be effective[1], and Derek’s technique is one example of improving video instruction to do something that opponents of direct instruction believe video can’t do.

Now, the caveats: one is that students need to be active and constructive while they are watching the video. The fact that they freely pull up a Khan Academy video is a good start. There’s no way to make sure this happens 100% of the time, just like it won’t happen 100% of the time in the classroom.

The other is that students may overestimate how well they know the material[2]. In fact, I believe this is one of the major problems with Khan Academy’s videos. Khan, diligently working through every term expansion and long division, is just so good at making us watchers feel like we’re the ones doing the practice.

Should students start with concepts or procedures? Many students are educated without developing the kind of mathematical thinking that we mathematical thinkers would like them to have. This problem has often been attributed to an overemphasis on procedural learning in the classroom. But is the idea of starting with the concepts itself a form of expert blind spot? It’s a complex issue, and it seems most likely that we need both, depending on the exact topic[3]. Sal Khan is clearly interested in expanding Khan Academy’s conceptual video repertoire.

Should videos address students’ misconceptions? Sometimes. Derek Muller provides several compelling experiments where addressing misconceptions clearly improved performance over a straightforward presentation. All of Derek’s examples, however, are areas where students typically have strong misconceptions that override their learning (kind of a fake expert blind spot). But sometimes students are really just learning something new, and there are no real misconceptions to address. Sometimes they have even deeper problems that require a different approach[4].

My suggestion for beginning to approach some of the problems raised above: forget the flipped classroom–let’s flip Khan Academy. Let the practice be the guide. Students start with the practice and use it to figure out their weaknesses. Often, all a student needs is a flag on their error to be able to figure out the problem[5]. But not always. That’s when you bring in the videos–in particular, the video that addresses exactly the incorrect or missing knowledge of the student. It is difficult but not impossible to assess the deep conceptual knowledge that we’d ultimately like to provide students[6]. And then Khan Academy can use real student data–not teachers’ rear-view mirrors–to figure out which videos are not getting the point across.
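The practice-first loop above can be sketched in a few lines of code. This is a minimal illustration, not Khan Academy’s actual system; all skill names, video titles, and function names here are hypothetical. The idea: grade a practice set, flag each error, and only route the student to a remedial video when a flag alone hasn’t been enough (i.e., the same skill is missed repeatedly).

```python
# Hypothetical mapping from a practice skill to the video that targets it.
VIDEO_FOR_SKILL = {
    "carrying": "Addition with carrying",
    "decimal_alignment": "Lining up decimal points",
    "sign_rules": "Multiplying negative numbers",
}

def grade(attempts):
    """Return the set of skills the student got wrong.

    `attempts` is a list of (skill, correct) pairs from one practice set.
    """
    return {skill for skill, correct in attempts if not correct}

def recommend(attempts):
    """Flag errors first; suggest a video only for repeated misses.

    Returns a list of (skill, video_or_None) pairs. A None video means
    "just flag it" -- often that alone lets the student self-correct.
    """
    misses = [skill for skill, correct in attempts if not correct]
    recommendations = []
    for skill in sorted(set(misses)):
        if misses.count(skill) == 1:
            recommendations.append((skill, None))  # a flag may suffice
        else:
            recommendations.append((skill, VIDEO_FOR_SKILL.get(skill)))
    return recommendations
```

The design choice worth noting is the ordering: assessment drives video selection, rather than videos driving assessment, so the data about which skills keep failing even after a video is exactly the data that tells you which videos aren’t getting the point across.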

Footnotes

[1]  The same Derek Muller has an excellent interview with direct instruction champion John Sweller. Klahr & Nigam, 2004, is one experiment where a direct instruction condition outperforms a discovery learning condition. A stronger statement is provided by Mayer, 2004.

[2] Students, particularly low-ability students, are poor estimators of their ability level (Mitrovic, 2001). Re-reading a passage (and presumably re-watching a video) is a comparatively poor study strategy, but students tend to believe it’s better than testing themselves, which is one of the best study strategies (Kornell & Son, 2009).

[3] Bethany Rittle-Johnson and colleagues have done a large body of work comparing conceptual and procedural learning (e.g. Rittle-Johnson & Alibali, 1999; Matthews & Rittle-Johnson, 2009). For learning decimal expansion, students iterated between procedural and conceptual instruction to gradually build a better mental representation of decimal numbers (Rittle-Johnson, Siegler, & Alibali, 2001).

[4] See Chi, 2008 for a few ways of classifying conceptual learning.

[5] See VanLehn et al., 2003 for a discussion.

[6] The Force Concept Inventory is a famous example of a conceptual assessment. It helped reveal that students who were scoring high on exams in a physics class weren’t actually learning the concepts from the lectures.

  • http://twitter.com/mpershan Michael Pershan

    It’s hard to disagree with your level-headed calls for more data. But you set a very low bar for the effectiveness of the KA videos. Do students learn from videos? Sure they do. Do students learn from 45 minute lectures? Sure they do.

    But how much? How effective is the teaching? Here you say, well, nothing works 100% of the time.

    Good teachers are claiming that what they do works more% of the time, and what Khan Academy does works less% of the time. That’s the claim, and, yes, there should be evidence brought for this. Teachers do have evidence for what they do, though. The clearest example in my mind is the way that modeling instruction has demonstrable effects on FCI performance, as does peer interaction. There is a great deal of qualitative evidence supporting what works in teaching. And while qualitative evidence is limited, it is limited in ways that complement the limitations of quantitative evidence when it comes to studying complex human interactions.

    So, where is the evidence for the effectiveness of video, direct instruction? There’s evidence that direct instruction is more effective when students are engaged. What else? Where is the data from Khan Academy, besides the number of students it has?

  • http://cs.cmu.edu/~ramuller Ryan Muller

    Thanks for your reply. There is a lot that I left out of the post. There are certainly better and worse teaching practices, and discussion of them is a good thing for teachers (and Sal Khan!) to participate in.

    Let me explain my point that “Khan Academy videos are not for teachers” another way. If a student watches a Khan Academy video and realizes they aren’t learning from the video, they will stop watching. If a teacher watches a Khan Academy video and thinks that Khan is using poor teaching methods, (a) they may be wrong,* and (b) they aren’t going to change the fact that Khan Academy is hosting free videos that are being viewed millions of times and that new ones are coming out faster than they can be parodied.

    So my intention is certainly not to give Khan Academy a free pass. But Khan Academy can either use its own data to evaluate the videos or, an even better option, help students understand more accurately whether they are learning from the video. I also suspect these are options Khan Academy would be more willing to consider, because Sal Khan has made it clear that he does not want to change his teaching style.

    I think it would be futile to centrally plan a perfect alternative to Khan Academy. But if there is a better self-assessment of learning, students may then opt to explore everything else available on the web (or, you know, actually talk to their real life teacher/peers/etc).

    * Which doesn’t mean their own teaching methods are ineffective, it just means their expert blind spot may render them unable to evaluate alternative teaching methods by observation.