Getting beyond massively lousy online courses

Sebastian Thrun on Udacity:

We have a lousy product.

In the article, Thrun concedes that MOOCs, the massive open online courses that surged in popularity a couple of years ago after professors from Stanford University introduced them, didn’t live up to their hype of democratizing education for the whole world.

Personally, I’d been anticipating the start of a particular MOOC for several months; there isn’t much educationally oriented material on the topic in existence. The week it finally came out, I finished Portal 2 instead of the first assignment, which involved installing, troubleshooting, and navigating a complex program, then hunting down the dataset within the MOOC software, all before the deadline.

Ain’t nobody got time for that.

What can MOOCs learn from Portal 2 about making a compelling product? Let’s take a look.

Why am I playing this game at all? Plot. I’m stuck in a dystopian science facility, hunted by the vengeful computer system GLaDOS. The startling setting and crazy characters immediately draw me in.

Each level in Portal 2 has a clear goal: open the door. Generally I need to learn one new thing to complete the level while integrating what I’ve learned before, providing incremental difficulty. Furthermore, the environment is full of affordances, guiding me to play with tools like blocks, buttons, and magical scientific bouncy goo.

{<1>}

Even once I’ve discovered which tools to use, it takes some trial and error to succeed in the level. The game provides feedback when something isn’t working: when I haven’t figured out how to jump far enough, I fall into a pit and drown in toxic water instead of reaching the far ledge.

Progress is concrete: I finish a level in about 10 minutes. Further, I receive a reward at the end: genuinely funny taunting from GLaDOS as I ride the elevator to the next level.

Compelling plots

The “why?” of a MOOC is usually confined to the professor droning on for a few minutes during the first lecture, listing ways the subject has been applied. There’s lots to say about storytelling, but there’s a reason “vague list” isn’t a story archetype. Plots are, partly, about fantasy: we can put the learner inside the applications and make them big and dramatic. Language learning? Take me to a foreign land. Applied math? Let me be that guy from Numb3rs. At least in college, I was a student on a four-year quest for a degree with my classmates. In a MOOC, I’m just a registered user who gets a lot of annoying emails.

Online learning has yet to go very far with this idea. One exception is Codecademy, where you at least have the larger objective of completing a project.

{<2>}Codecademy's final JavaScript lesson is framed as replacing a broken cash register

Clear goals

MOOCs often ask you to complete a complex task in a complex environment. You need to switch back and forth between the software and slides for step-by-step instructions, and you don’t even understand what you’ve achieved at the end.

DragonBox teaches algebra using the principle of clear goals. Each level has the same goal of isolating the spiral, but the levels incrementally teach all aspects of solving algebraic equations.

{<9>}DragonBox has a clear goal: isolate the spiral (grounding the idea of 'solve for x')
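To make that structure concrete, here’s an illustrative progression of my own (not DragonBox’s actual levels): the goal never changes, isolate x, but each level demands one new legal move.

```latex
% Hypothetical level progression: same goal every time (isolate x), one new move per level.
\begin{align*}
x + 3 &= 7 && \Rightarrow\ x = 4 && \text{(move: subtract 3 from both sides)} \\
2x + 3 &= 7 && \Rightarrow\ x = 2 && \text{(new move: then divide both sides by 2)} \\
2x + 3 &= x + 7 && \Rightarrow\ x = 4 && \text{(new move: collect the $x$ terms first)}
\end{align*}
```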

Incremental difficulty

Professors seem to love to jump into applied knowledge. Before making sure you get the definition of something, they’re asking you to transform and apply it.

{<11>}DuoLingo highlights the one new word introduced in this problem

In contrast, DuoLingo succeeds in incremental difficulty: it typically presents one new word at a time.
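That constraint is simple enough to sketch in code. Here’s a toy version of my own (not DuoLingo’s actual algorithm, and the sentence data is made up): pick the next exercise so that it contains exactly one unseen word.

```ts
// Toy sketch of "one new word at a time": choose the next sentence whose
// words are all known except exactly one.
const known = new Set(["la", "femme", "mange", "une"]);
const pool = [
  "la femme mange",           // nothing new: too easy
  "une pomme rouge",          // two unknown words: too hard
  "la femme mange une pomme", // exactly one unknown word: just right
];

function nextExercise(knownWords: Set<string>, sentences: string[]): string | undefined {
  return sentences.find(
    (s) => s.split(" ").filter((w) => !knownWords.has(w)).length === 1
  );
}

console.log(nextExercise(known, pool)); // "la femme mange une pomme"
```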

Affordances

Check out Quill: it presents a textbox announcing, “There are nine errors in this passage. To edit a word, click on it and re-type it.” I have no desire to learn anything more about grammar, yet I corrected several errors during my first visit to the page. The textbox, the existence of errors, and even the typography and the way individual words are selected when clicked all afford playing with it.

{<3>}Quill's interface affords testing your knowledge of correct writing
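The mechanics behind that affordance are simple enough to sketch. Here’s a minimal browser version (my assumption of how it could work, not Quill’s actual code; the passage and error list are made up):

```ts
// Each word becomes a clickable, editable span; fixing a planted error
// gives immediate visual feedback.
const words = "Their going to the park tomorow .".split(" ");
const corrections: Record<number, string> = { 0: "They're", 5: "tomorrow" };

const container = document.createElement("p");
words.forEach((word, i) => {
  const span = document.createElement("span");
  span.textContent = word + " ";
  span.contentEditable = "true"; // clicking a word lets you retype it in place
  span.addEventListener("blur", () => {
    if (corrections[i] && span.textContent?.trim() === corrections[i]) {
      span.style.color = "green"; // the fix is acknowledged immediately
    }
  });
  container.appendChild(span);
});
document.body.appendChild(container);
```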

While it’s true that the multiple-choice prompts common in MOOCs afford providing an answer, they are generally removed from the environment and tools you’d actually be working with.

Feedback

One of my major takeaways from interviewing many users of online learning systems is that the loop of instruction, practice, and feedback is way too long. Imagine that I watch several hours of video lecture over a couple of days, then come back another day to do the assignment. Of course there are key ideas in the lectures I didn’t understand or don’t remember, so I have to hunt them down within those hours of video. Of the dozens of concepts covered in the videos, I get about ten questions’ worth of practice on the quiz. Finally, I might not even receive immediate feedback on that quiz: I have to wait until after the quiz deadline to see what I missed and understand why, if I ever come back to look at all.

Based on Bret Victor’s principle that creators should immediately see the effects of their changes, Khan Academy’s computer programming environment allows you to adjust variables in the code and see the results on screen.

{<5>}Khan Academy CS lets you adjust numerical input values and instantly see the result

In other words, you get feedback as you adjust the code. However, this feature responds to only one very minor aspect of programming. Imagine an environment that gives feedback about a misunderstanding of conditionals or recursion, and then we’re getting somewhere. Indeed, Victor responded with an article about how they got it all wrong. You should read it.
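The core loop behind that kind of liveness is tiny. Here’s a rough approximation of mine (not Khan Academy’s actual implementation): re-run the learner’s drawing code on every change to a value.

```ts
// Approximation of the live-feedback loop: any change to a sketch value
// immediately re-runs the learner's draw code.
type Sketch = { x: number; radius: number };

function draw(s: Sketch): void {
  // Stand-in for rendering to the canvas.
  console.log(`ellipse at x=${s.x}, radius=${s.radius}`);
}

function makeLive(sketch: Sketch): Sketch {
  // A Proxy re-renders on every assignment, so the learner sees the effect
  // of a change the instant they make it.
  return new Proxy(sketch, {
    set(target, key, value) {
      (target as any)[key] = value;
      draw(target);
      return true;
    },
  });
}

const live = makeLive({ x: 50, radius: 10 });
live.radius = 42; // dragging the number in the editor would do this; the output updates at once
```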

Meaningful rewards

In The Power of Habit, Charles Duhigg explains that concluding an interaction with a reward is a powerful way to instill habits. The trend of gamification has chased this effect through badges and points. But as Portal 2 shows, rewards are an opportunity to entertain and drive the plot forward, not just to pad pockets with a fake currency.

CodeCombat (disclosure: I’m friends with one of the founders) is a new effort to teach programming that uses this idea well. Once you’ve successfully programmed your soldier, you get to watch him execute his program and kill the ogre. You also get to see the “spells” you learned in that level. It’s like collecting badges, but it also uses the moment to let you reflect on what you’ve just learned.

{<4>}CodeCombat displays your code execution as your character defeating the ogre
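One way to read this design is as a record-then-replay pattern: the learner’s program runs once to produce a script of actions, and the replay of that script is the reward. A minimal sketch (the API names here are made up, not CodeCombat’s real ones):

```ts
// Record-then-replay: the learner's program produces a script of actions,
// and watching the script play back is the payoff.
type Action = { verb: string; target?: string };
const script: Action[] = [];

const soldier = {
  moveRight: () => { script.push({ verb: "moveRight" }); },
  attack: (target: string) => { script.push({ verb: "attack", target }); },
};

// The learner's code for the level:
soldier.moveRight();
soldier.attack("ogre");

// The reward: replay each recorded action as animation (stubbed with a log).
for (const action of script) {
  console.log(`soldier ${action.verb}${action.target ? " " + action.target : ""}`);
}
```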

Final thoughts

Some of these principles also apply to developing better tools for doing our work. If a tool is well designed, learning it is easier. However, it is still important to understand the learner’s state, that is, the differences in what different users already know and understand. Considering the learner’s state implies we should set goals of incremental difficulty and indicate and reward when those goals are achieved, just as good games sequence levels with clear goals and incremental difficulty for the player.

There’s plenty more to consider for an ideal learning environment. I’ve written before about spaced repetition, mnemonics, and multimedia. But I believe that solid execution on these principles gets us 80% of the way there. As Sebastian Thrun’s resignation demonstrates, we have a very difficult job ahead of us.

The biased versus the heartless

Decision making is hard. For instance, we seem to be awful at making hiring decisions. Daniel Willingham describes a study in which interviewers who received accurate answers to their questions gained no advantage over those given random information. Google has examined the data in practice and found that structured interviews with a rubric are more effective than brainteasers. One example, from marketing professor Adam Alter, is particularly offensive: people with easier names are more readily promoted.

We like easy names. That’s a clear picture of how we are biased, emotional, and have limited processing capacity.

But the other side of the coin, trusting decisions to computers, has its own subtle set of problems, as two recent articles examine.

Nicholas Carr tells us that “All Can Be Lost” when we put knowledge in the hands of a machine. While computer automation in, for example, flying planes may initially seem safer and more effective, human operators meanwhile begin to lose their skills.

Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated.

And at this point the human operator is no longer capable of taking over. It’s a race between pilots losing their knowledge and technology advancing in knowledge and robustness. In this case, technology seems to be winning: air travel is already much safer than car travel and has been getting safer still.

What about computers as actors in complex systems? In The Real Privacy Problem, Evgeny Morozov makes several points about the inadequacy of technological solutions to protecting our privacy. One in particular is that we may not be able to interpret the decisions or predictions of machines. This undermines our legal and political systems, which are rooted in deliberation through natural language.[1]

My understanding of politics is limited, but there’s an analogue in educational technology. In adaptive learning systems, computer models make predictions and assist learners based on their performance. Here too there is backlash, such as Dan Meyer’s: machines may be able to determine that an answer is incorrect, but they aren’t able to connect to the human mind making that mistake the way a teacher can. There are tools such as teacher dashboards, as the blogger from Pearson proposes to Meyer, or open learner models that expose the computer’s knowledge of the student so that students can scrutinize their own models. As Meyer correctly notes, however, designing a tool that’s actually useful entails its own difficulties.
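To make “the computer’s knowledge of the student” concrete: many adaptive systems track per-skill mastery with something like Bayesian Knowledge Tracing. Here’s a minimal sketch of the standard update (the parameter values are illustrative, not from any particular product):

```ts
// Bayesian Knowledge Tracing: the per-skill mastery estimate an adaptive
// system might expose in an open learner model.
interface BKTParams {
  pLearn: number; // P(skill becomes learned on any single practice opportunity)
  pSlip: number;  // P(wrong answer despite knowing the skill)
  pGuess: number; // P(right answer without knowing the skill)
}

function updateMastery(pKnown: number, correct: boolean, p: BKTParams): number {
  // Bayes update on the observed answer...
  const pCorrect = pKnown * (1 - p.pSlip) + (1 - pKnown) * p.pGuess;
  const posterior = correct
    ? (pKnown * (1 - p.pSlip)) / pCorrect
    : (pKnown * p.pSlip) / (1 - pCorrect);
  // ...then account for learning from the attempt itself.
  return posterior + (1 - posterior) * p.pLearn;
}

const params: BKTParams = { pLearn: 0.15, pSlip: 0.1, pGuess: 0.25 };
let mastery = 0.2; // prior belief that the skill is already known
for (const correct of [false, true, true]) {
  mastery = updateMastery(mastery, correct, params);
  console.log(mastery.toFixed(2)); // the trajectory a student could scrutinize
}
```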

What can we conclude?

  1. Inevitably, more and more of our knowledge will be in the hands of computers. We can’t just hope this won’t happen.
  2. We must understand and codify how humans learn: we are biased, emotional, and limited in processing capacity, but we can learn complex patterns when given accurate feedback. With that knowledge, computers will be able to teach their results to humans. Morozov points to the concept of adversarial design, where technology can “provoke and engage the political.”
  3. Whether decisions are made by humans, machines, or some system of both, the impact of any single decision should be kept as small as possible. Mistakes then do minimal damage and can even support growth. A single failed flight (out of 30,000 per day in the US), as tragic as it may be, can teach us a great deal. (This is Taleb’s idea of antifragility.)
  4. None of this will be easy. That’s why you should hire me.

[1] As contrasted with computation. See again The Mother of All Disruptions.