Design vs. learning

The angriest I’ve gotten in recent memory was while arguing with a friend about her bottled-water drinking habit. It wasn’t that she drank the water that made me angry but that she didn’t want to consider any information that might suggest why she should or shouldn’t drink (as much) bottled water (if you’re scratching your head about why this may even be an issue, see for example http://science.howstuffworks.com/environmental/green-science/bottled-water4.htm). Her argument was along the lines of “I know how behavior change works. Information won’t lead to change in my behavior.” This is fascinating because it’s a failure in rationality that results from a misunderstanding of theories about failures in rationality, theories that have come to the attention of someone like her only recently due to the proliferation of behavioral economics.

Her reasoning is rooted in empirical research. In one example of many from Nudge, Minnesota taxpayers were given different types of information about complying with tax law. Only one group showed a significant change in behavior: the one given a not-so-informative social cue, simply that “90 percent of Minnesotans already complied, in full, with their obligations under tax law.” The generalization abstracted from this and many similar studies is that various social and perceptual cues are far more effective than information at producing a desired change in outcome. As a generality, I think it makes sense.

Consider this analogous experiment, which sounds like a horrible word problem come to life. A group of fifth graders was taken to a store that was having a sale. Two identical shirts were for sale, but one was 60% off while the other was 30% off with an additional 40% off the sale price. One group of children was given information about how to multiply percents. The other group was given no information; instead, the sign for the cheaper shirt was printed in bright colors. The result (OK, not a real experiment, but a reasonable guess): many more children in the second group got the cheaper shirt! So information sucks, right??

There are two problems with this conclusion that are much easier to recognize than in the tax compliance case. First, the information is not properly presented: fifth graders are not able to understand and apply math with percents after the presumably brief intervention provided by the experiment. Second, the children in the other group merely happened to be nudged toward the cheaper shirt[1]. Generalizing this idea of “let us be nudged” relies on some unnamed party having good intentions toward the nudgee, the right idea about how to nudge, and so on.

When we can understand and apply information, we become more powerful and free. If we know some math, we can calculate that the first shirt is 40% of the original cost, while the second is 42%, so the first is cheaper.
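
To spell out that arithmetic, here’s a tiny sketch (the $100 original price is hypothetical, just for illustration):

    // First shirt: 60% off. Second shirt: 30% off, then 40% off the sale price.
    var original = 100;
    var firstShirt = original * (1 - 0.60);               // $40, i.e., 40% of original
    var secondShirt = original * (1 - 0.30) * (1 - 0.40); // $42, i.e., 42% of original
    // firstShirt < secondShirt, so the 60%-off shirt is the better deal.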

There is a limit to what we can know. We may be able to calculate, by ourselves, the better deal in a store. We cannot calculate the aerodynamics of an aircraft’s wings before hopping on board. We trust our lives to the engineers, the pilots, and the air traffic controllers when we fly on a plane. Even what we learn is entrusted to the planning of curriculum designers, school board members, and teachers.

Back to the bottled water. My friend, who drinks the bottled water at work, was in some sense nudged into this habit by the free and readily available water. She is even nudged into some sense of environmental responsibility due to the recycling efforts at her workplace. But assuming that she has access to tap water and not an overly biased perception of the taste of tap water[2], there is little barrier to amending this habit.

Imagine my friend were a thoughtful and steadfast environmentalist who, somehow, wasn’t aware of any possible negative consequences of consuming bottled water. I imagine that she would have quickly devoured the information I presented and taken action to change her behavior. But since she doesn’t have that disposition, she will not simply act on the information alone. Like the fifth graders at the store, she must learn. If she is to be convinced to change her behavior on this issue, she must learn facts about the effects of bottled water and, more importantly, beliefs about the importance of the issue[3]. But learning is difficult and time-consuming, and most people devote little if any conscious time to learning.

At one point she pointed to the company: “If I shouldn’t be drinking bottled water, they should do something about it.” But who is “the company”? Practically speaking, it’s probably the office manager in charge of what’s made available in the office kitchen. But even the office manager may be handed down instructions on what to offer, based on central planning for all office locations.

My friend, empowered with knowledge, could try moving up the chain, talking to the office manager and then to the manager’s manager. In a company like hers, it’s likely that the office manager has a bit of flexibility. On the most positive front, there seems to be a movement in design toward allowing decisions to be made locally by end-users[4]. But even if my friend were designing the office kitchen herself, before (and maybe even after) her argument with me, she would choose the bottled water. First she must learn.

EDIT 7/27 – changed the wording and added footnote about what she “must learn” since multiple people were confused.

[1] There is a middle ground between knowing math and nudging someone directly to the answer, which is better tools for doing (“truthful”) math without a full understanding of the underlying concepts. I’ve discussed this in past weekly reviews in relation to Bret Victor’s Kill Math project. It didn’t fit into this post, but maybe I’ll bring it up again.

[2] “But a couple of very non-scientific, blind taste tests have found that most people — or most people in New York City, to be more accurate — can’t actually tell the difference between tap water and bottled water once they’re all placed in identical containers.” http://science.howstuffworks.com/environmental/green-science/bottled-water3.htm

[3] I think what was finally a bit convincing for her was the fact about how much water and fuel are used in manufacturing the bottle, which is not offset by recycling it. I estimated that her consumption over a year could perhaps equal what someone needs to live on, say, in India.

[4] Co-design and metadesign are some terms for this. I plan to talk about this a lot more in the future.

Are mnemonics a waste of time for language learning?

Suppose you want to learn to write the (simplified) Chinese character 汉 (meaning: Chinese or Han). It’s made of two components: 水 (meaning: water), represented as the three dots on the left, and 又 (meaning: again) on the right. To remember this character, I might make up a story using the two components and the meaning of the character: “Like a tide of water, dynasties like the Han have, again and again, risen and fallen.”

Such mnemonic systems are popular. I have been following the “Heisig system” of the Remembering Simplified Hanzi books for a couple years. The question for this post is, is it worth it? I assume either way you are using a spaced repetition system in a standard way.

Scenario 1: You’re taking a class on Chinese; you might have tests where you need to remember a fixed set of characters, and you have a good amount of time to do so. You see “Han” on the page, think about it, and get a flicker of an image of a dynasty receding like a tide. “The tide–water… again and again… yes, that’s it!” The mnemonic certainly seems useful here, assuming you spend less time studying with it than you would without it (probably true).

Scenario 2: You’re learning Chinese over a span of several years. You want to be totally fluent in writing, meaning that if you’re writing out an article, you don’t want to be conjuring up a story for every character and then translating that into components. You aren’t worried about knowing specific characters for quizzes in the intermediate stages. Which of the following is most accurate?

  1. A mnemonic is worth it in terms of how quickly you can remember a character. You can get fast enough with them that you don’t need to transition from “story translation” to automaticity.
  2. A mnemonic is worth it. As you become more and more fluent, you will transition to automaticity, and the mnemonic serves as a useful scaffold.
  3. A mnemonic is not worthwhile. Because you will eventually need to learn to write the character to automaticity anyway, the mnemonic is simply an extraneous step.

I would doubt the first explanation, because I no longer need to consciously recall the story for many characters (although it could be that the story is somehow being used unconsciously). An even more extreme position would be that any memorization, whether you use a deliberate mnemonic or not, ties itself to stories and images in your unconscious!

The usefulness could be determined experimentally if you had many participants and several years. The best I could find in a quick literature search was from Lawson & Hogben (1997):

There is also support for the value of using deliberate mnemonic strategies, particularly in the early stages of foreign language learning (e.g., Carter, 1987; Carter & McCarthy, 1988; Nation, 1990; Oxford, 1990).

“Early stages”, so already not what I’m looking for.

Finally, the concept of desirable difficulties (Bjork & Bjork, 2011) could lend support to the third explanation, but I’m not clear on the conditions under which it actually applies.

Questions for DragonBox: Can algebra be taught with a game?

DragonBox is a mobile learning app that’s getting a lot of hype. Watch the video in that article or better yet try out the game for $3. I won’t talk about the obvious pro that DragonBox is fun and motivating. Instead, as a small exercise in learning science and experimental design, I’ll go through some questions that I would like answered to be convinced that DragonBox is actually succeeding at teaching people algebra:
  • Are most players able to advance through DragonBox?
  • Do the actions in DragonBox transfer to actual algebra problems?
  • Can students apply the constraints provided by DragonBox by themselves?
  • Is the procedural learning of DragonBox inferior to conceptual instruction?

Are most players able to advance through DragonBox?

This is the question that We Want to Know, the makers of DragonBox, should be able to answer easily by themselves by looking at data on what people do in the game. It’s also perhaps the hardest question for me to judge, because I’m already entirely familiar with the underlying mechanics. It certainly feels like a typical puzzle game, where one can work up incrementally. Watching the five- and eight-year-olds in that video, I’d have to guess yes for most people.

Do the actions in DragonBox transfer to actual algebra problems?

Transfer is the thorn in education’s side. OK, you’ve taught someone something, but can they actually use it in any context except the one in which they learned it? Often not. In a famous experiment by Gick & Holyoak (1983), most subjects were not able to solve a problem analogous to one they had just been taught. And in general, the evidence for effective educational games is incredibly sparse.

But here DragonBox does exactly the right thing–it gradually introduces the real symbols in place of the dragons and boxes. It seems to be a plus for DragonBox, but there’s one more issue:

Can students apply the constraints provided by DragonBox by themselves?

In DragonBox, when you add a monster to one side, you are not able to proceed until you add the same monster to the other side (thus keeping the equation equal). I refer to this as an external constraint imposed by DragonBox. This constraint and several like it make it easier to stay on the right track when solving one of the puzzles. How much does this matter?

Prior experimental evidence suggests it may matter a lot. Zhang & Norman (1994) performed a number of experiments varying the type of constraints that were externally represented for the Tower of Hanoi puzzle. Some of these constraints made solving the puzzle drastically easier. Why? We have a limited capacity for thinking, and the constraints make some of that processing automatic or nearly automatic.

But DragonBox may be different. The constraints here are rules applied in separate steps as opposed to constraints that affect the space of possibilities one has to consider. Although it may take a bit of work outside the game to really habitualize those steps, I don’t think that diminishes the value of the other parts that have been learned.
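
To make the mechanic concrete, here’s a hypothetical sketch of the balance constraint described above (my own illustration, not the game’s actual code):

    // After adding a card to one side, every move except adding the same
    // card to the other side is blocked, keeping the player on track.
    var equation = { left: [], right: [], pendingSide: null, pendingCard: null };

    function addCard(side, card) {
      var mustBalance = equation.pendingSide !== null;
      if (mustBalance && (side !== equation.pendingSide || card !== equation.pendingCard)) {
        return false; // blocked: balance the previous addition first
      }
      equation[side].push(card);
      if (mustBalance) {
        equation.pendingSide = null; // balance restored; all moves unlocked
        equation.pendingCard = null;
      } else {
        equation.pendingSide = side === "left" ? "right" : "left";
        equation.pendingCard = card; // the only legal next move
      }
      return true;
    }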

Is the procedural learning of DragonBox inferior to conceptual instruction?

I brought this issue up in my last post on Khan Academy, but it’s worth further discussion. DragonBox teaches procedural knowledge–it says nothing about the concept behind why a dragon on top of another of the same type becomes a 1. So even if learning from the game can transfer to real algebra problems, might it be better to use conceptual instruction from the start? Even though I don’t think the concepts really shine through, I do think the procedural knowledge gained is useful as an iterative part of learning.

Some representational changes that DragonBox may afford include:

  • Thinking of the equal sign as separating an equation into two balancing sides. (concept of mathematical equality)
  • Thinking of added quantities as loosely arranged terms, where the “loosely arranged” part may help understand them as commutative. (commutative property)
  • Thinking of multiplication and division as applying to each of those terms. (distributive property)
  • Thinking of negative terms as canceling. (additive inverses)

And there are also some ideas that ultimately fall more under the procedural umbrella, like the idea of canceling additive terms before multiplicative factors when isolating the variable (e.g., for 3x + 6 = 15, subtract 6 from both sides to get 3x = 9 before dividing by 3 to get x = 3).

Or… do an experiment

Confession: you don’t actually have to answer any of those questions to figure out whether algebra can be taught with DragonBox. You can just give people who don’t know algebra the game and see whether they learn it without any exposure to other instruction. You can give them a test before the game and a test after and see how much they learned.

In learning science, however, an experiment will typically use some type of control condition, a group that doesn’t use the game to learn. To see why, imagine that the group playing the game (called the “treatment group”) did improve. Maybe you accidentally gave an easier test at the end, or maybe they learned everything from the pre-test or watching cartoons or something. You don’t know for sure. So you would want another group (control group) that takes the same pre-test and post-test but doesn’t play the game in between. If the DragonBox group does better, you know you’ve (most likely) got something good!
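
To make that comparison concrete, here’s a toy sketch of the analysis (the scores below are entirely made up for illustration):

    // Compare mean learning gains (post-test minus pre-test) between a
    // treatment group that played DragonBox and a control group that didn't.
    function meanGain(scores) {
      var total = 0;
      for (var i = 0; i < scores.length; i++) {
        total += scores[i].post - scores[i].pre;
      }
      return total / scores.length;
    }

    // Invented example data: each entry is one student's test scores.
    var treatment = [{pre: 20, post: 65}, {pre: 35, post: 80}, {pre: 25, post: 70}];
    var control = [{pre: 22, post: 30}, {pre: 30, post: 38}, {pre: 28, post: 33}];

    // If the treatment gain is clearly larger (ideally backed by a
    // significance test), the game most likely taught something.
    console.log(meanGain(treatment)); // 45
    console.log(meanGain(control));   // 7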

Such a design would also let you do other types of comparisons, such as comparing the game to some other form of instruction, or using the game in combination with conceptual instruction and comparing that to either alone.

In conclusion, I’m cautiously optimistic about DragonBox’s ability to improve learning, especially if it is augmented with conceptual instruction. I’m also curious what other topics would lend themselves to a game like this (check out some discussion by Terence Tao), or alternatively what kind of crazy math I can now do after playing thousands of levels of Unblock Me.

Slow web yourself: how to send daily email from Google spreadsheets

Lately I’ve been thinking about the slow web. More on that later. I also started reading the Tao Te Ching and, rather than speeding through the whole thing without absorbing much, I wanted to slow down my reading to one section per day, emailed to me each morning. I figured out I could do this, like most things, with Google spreadsheets. Here’s how:

  1. Create a new spreadsheet where each row has your email address in the first column and the text of the email in the second column.
  2. Open the script editor from “Tools > Script editor…”
  3. Add the following code:
    function sendEmails() {
      // Read every row of the active sheet: column 1 holds the recipient
      // address, column 2 holds the text of that day's email.
      var sheet = SpreadsheetApp.getActiveSheet();
      var startRow = 1;
      var numRows = sheet.getLastRow();
      var dataRange = sheet.getRange(startRow, 1, numRows, 2);
      var data = dataRange.getValues();

      // Pick today's row: days elapsed since firstDay, wrapping back to the
      // first row once we run out (note the zero-indexed month: 6 = July).
      var firstDay = new Date(2012, 6, 7);
      var today = new Date();
      var daysElapsed = Math.floor((today - firstDay)/(1000*60*60*24));
      var whichRow = daysElapsed % numRows;

      var row = data[whichRow];

      var emailAddress = row[0];
      var message = row[1];
      var subject = "Daily Tao";

      MailApp.sendEmail(emailAddress, subject, message);
    }
    
  4. This script will march through the rows of the spreadsheet day by day. Customize the starting date by changing the “firstDay” variable: the first value is the year, the second is the month minus one because JavaScript months are zero-indexed (July is 6), and the third is the day of the month (so July 7, 2012 here). The “% numRows” makes it loop back to the beginning after getting through all the rows; you can remove it if you just want the emails to stop after the last row.
  5. Customize the message subject (currently “Daily Tao”).
  6. Now you need a trigger to run the script every day. In the script editor, go to “Resources > Current script’s triggers…”
  7. “Add a new trigger”. Change “From spreadsheet” to “Time driven”, then select “Day timer” and then choose the approximate time of day to receive the email. Press “Save”. (Or, if you prefer, create the trigger in code, as sketched below.)
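
Here’s a minimal sketch of that code-based alternative, using the standard ScriptApp API; run it once from the script editor (the 7am hour is just an example):

    function createDailyTrigger() {
      // Runs sendEmails() once a day, around 7am in the script's time zone.
      ScriptApp.newTrigger("sendEmails")
        .timeBased()
        .everyDays(1)
        .atHour(7)
        .create();
    }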

That’s it! You should be getting an email each day now. Another option is to send yourself a random message instead of going in order. Just change these four lines,

  var firstDay = new Date(2012, 6, 7);
  var today = new Date();
  var daysElapsed = Math.floor((today - firstDay)/(1000*60*60*24));
  var whichRow = daysElapsed % numRows;

to

  var whichRow = Math.floor(Math.random()*numRows);

Dear teachers: Khan Academy is not for you

Dear teachers: Thanks for what you do. But I have a message: Khan Academy videos are not for you. The videos are for students, and students are using them. So I think MTT2k (Mystery Teacher Theatre 2000) is misguided. We should be sitting the students in front of the videos and trying to figure out what goes on in their heads, rather than sitting the teachers in front of them.

Here’s why teachers won’t get it right: Expert blind spot refers to the idea that “content knowledge eclipses pedagogical content knowledge” (Nathan, Koedinger, & Alibali, 2001). EBS does not mean that teachers don’t have enough pedagogical content knowledge. It doesn’t mean that teachers (or researchers!) who know about EBS are suddenly able to think like a student. It means that when people think, they necessarily think using their content knowledge. Teachers cannot think like a student who does not have that content knowledge. Picture a champion weightlifter trying to imagine–with some degree of accuracy–how his barbell feels to a puny first-timer.

(Image caption: Your students are stacked on a motorcycle in the right lane.)

The problem with MTT2k is that the teachers are trying anyway to imagine what a student is thinking while watching one of Khan’s videos. Because the teachers aren’t busy learning the material, they have all kinds of spare attention to direct at any detail that goes by without a complete, polished explanation. In the real world, we never have that luxury; we have to assemble our knowledge from incomplete, messy fragments.

The original–and still the best–Khan critique is Derek Muller’s commentary that Khan Academy does not address misconceptions in its videos. He compared a straightforward video introducing physics concepts to one that first introduced common misconceptions and then cleared them up by presenting the correct concepts. Although students found the first video clear and concise, they didn’t actually pay attention and learn from it. (Wait, is that the same “clear and concise” that MTT2k producers are asking for?)

My point is not that teachers are wrong and Khan Academy is wrong and Derek Muller is right (just because we share a surname). My point is that we have to look at empirical data to determine what instructional styles actually work or do not work. So let’s answer some questions, shall we?

Can students learn from videos or even lectures in general? YES, with two caveats. The debate over direct instruction and discovery learning is long and brutal, but there are clear data that direct instruction can be effective[1], and Derek’s technique is one example of improving video instruction to overcome one thing that direct instruction opponents believe can’t be done with video.

Now, the caveats: one is that the student needs to be active and constructive when they are watching the video. The fact that they freely pull up a Khan Academy video is a good start. There’s no way to make sure this is happening 100% of the time, just like it won’t happen 100% of the time in the classroom.

The other is that students may be overconfident about their grasp of the material[2]. In fact, I believe this is one of the major problems with Khan Academy’s videos. Khan, diligently working through every term expansion and long division, is just so good at making us watchers feel like we’re the ones doing the practice.

Should students start with concepts or procedures? Many students are educated without developing the kind of mathematical thinking that we mathematical thinkers would like them to have. This problem has often been attributed to an overemphasis on procedural learning in the classroom. But is the idea of starting with the concepts itself a form of expert blind spot? It’s a complex issue, and it seems most likely that we need both to learn, depending on the exact topic[3]. Sal Khan is clearly interested in expanding Khan Academy’s conceptual video repertoire.

Should videos address students’ misconceptions? Sometimes. Derek Muller provides several compelling experiments where addressing misconceptions clearly improved performance over a straightforward presentation. But all of Derek’s examples are areas where students typically have strong misconceptions that override their learning (kind of a fake expert blind spot). Sometimes students are really just learning something new, and there are no real misconceptions to address. Sometimes they have even deeper problems that require a different approach[4].

My suggestion for beginning to approach some of the problems raised above: forget the flipped classroom; let’s flip Khan Academy. Let the practice be the guide. Students start with the practice and use it to figure out their weaknesses. Often, all a student needs is a flag on their error to be able to figure out the problem[5]. But not always. And then you can bring in the videos–in particular, the video that addresses exactly the incorrect or missing knowledge of the student. It is difficult but not impossible to assess the deep conceptual knowledge that we’d ultimately like to provide students[6]. And then Khan Academy can use real student data–not teachers’ rear-view mirrors–to figure out which videos are not getting the point across.

Footnotes

[1] The same Derek Muller has an excellent interview with direct instruction champion John Sweller. Klahr & Nigam, 2004 is one experiment where a direct instruction condition outperforms a discovery learning one. A stronger statement is provided by Mayer, 2004.

[2] Students, particularly low-ability students, are poor estimators of their ability level (Mitrovic, 2001). Re-reading a passage (and presumably re-watching a video) is a comparatively poor study strategy, but students tend to believe it’s better than testing themselves, which is one of the best study strategies (Kornell & Son, 2009).

[3] Bethany Rittle-Johnson and colleagues have done a large body of work comparing conceptual and procedural learning (e.g., Rittle-Johnson & Alibali, 1999; Matthews & Rittle-Johnson, 2009). For learning decimals, students used procedural and conceptual instruction in iteration to gradually build a better mental representation of decimal numbers (Rittle-Johnson, Siegler, & Alibali, 2001).

[4] See Chi, 2008 for a few ways of classifying conceptual learning.

[5] See VanLehn et al., 2003 for a discussion.

[6] The Force Concept Inventory is a famous example of a conceptual assessment. It helped reveal that students who were scoring high on exams in a physics class weren’t actually learning the concepts from the lectures.