Weekly review through January 29

Shuff as addiction, and being a better grad student

When I want to feel productive and don’t have more pressing matters, I read an article. Here’s how:

  1. For a given article that I want to read, I use researchr to generate a wiki page with a link to the PDF on my local machine. I add a link to the wiki page on Shuff, my productivity manager.
  2. When I want to start reading the article, I go to the Shuff task and either choose a link or click a button to pull up a random one (hacked into Shuff using dotjs).
  3. Read, highlight, regenerate the wiki page, and summarize. If I finish before the 25 minutes allotted with Shuff, I’ll move on to another one!
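The random-link button from step 2 could be as simple as the sketch below. Shuff’s real button is a dotjs hack into a web page, and the wiki link markup here is hypothetical — substitute whatever format researchr actually emits.

```python
import random
import re

def pick_random_paper(wiki_text):
    """Pull all PDF links out of a wiki page and return one at random.

    The [[path|label]]-style link pattern is an assumption for
    illustration -- adjust the regex to your wiki's actual markup.
    """
    links = re.findall(r"\[\[(\S+\.pdf)(?:\|[^\]]*)?\]\]", wiki_text)
    if not links:
        raise ValueError("no PDF links found on this page")
    return random.choice(links)

page = "Reading queue: [[papers/steel-2007.pdf|Steel 2007]] [[papers/zhang-1997.pdf]]"
print(pick_random_paper(page))  # one of the two paths above
```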

This activity has many habit-forming properties, tying in temporal motivation theory (Steel, 2007): fixed time, a tangible reward (removing the item from the list), a perceived high-level reward (gaining some knowledge of the field), and an endless selection (I usually add a couple of new papers from the citations each time).
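Temporal motivation theory is usually summarized by a utility equation; here is a rough sketch with made-up numbers (my paraphrase of Steel’s formulation, not a quote from the paper):

```python
def tmt_utility(expectancy, value, impulsiveness, delay):
    """Temporal motivation theory (Steel, 2007): perceived utility of a task.

    Utility rises with the expectancy of success and the value of the
    reward, and falls as impulsiveness and the delay to the reward grow.
    """
    return (expectancy * value) / (1 + impulsiveness * delay)

# A 25-minute reading block: near-certain completion, modest value, tiny delay.
soon = tmt_utility(expectancy=0.9, value=5, impulsiveness=1.0, delay=0.02)
# The same reward pushed a week out is swamped by the denominator.
later = tmt_utility(expectancy=0.9, value=5, impulsiveness=1.0, delay=7)
assert soon > later
```

The short, fixed time box keeps the delay term tiny, which is one way to read why the habit sticks.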

Is it a good thing? Several students have told me they don’t read enough, but I’ve also heard not to read too much, or else risk stifling your own ideas. Not to mention the better things that could be done in that time. I’ll wait to see the proof that reading too much actually impedes creativity, but I do believe that the time could be better spent. CS professor Matt Might claims that grad students need to read about 50 to 150 papers for thesis work and suggests strategically avoiding learning too much. How many have I read, with not even one year down? No comment…

So I’m wondering whether I can extend the activity that I’ve created for reading papers to acquiring research skills, generating ideas, making things, and writing. Here are some thoughts so far:

  1. An idea from a professor this week on getting more from the paper reading: skip the introduction and conclusion and dive right into the methods. See what the authors actually did rather than what they claim to have done. This is the opposite of the approach I’ve been taking — why not just get the digested version, plus all those juicy citations in the intro? But by reading the methods, I can practice interpreting experiment results and better understand the limitations of the study, which authors often underemphasize. It’s especially useful for a field like learning science where you can find many papers with seemingly contradictory claims. By comparing the methods in depth, a better understanding can sometimes emerge.
  2. One reason I read papers instead of doing research is a lack of confidence in my ideas. In other words, I’m guided by invisible scripts: “I need a good idea before I should start working too hard on it” on top of “I’m still new to this field; I don’t know what makes a good contribution.” Recently I’ve made progress: I conceived, researched, wrote, and submitted a workshop paper in less than a week. I can’t make any claims about the quality, but it’s more output than the months I spent reading literature and trying to come up with better ideas. One strategy from this: treat ideas like great ideas. Try to make an opportunity to contribute it somewhere, but if that doesn’t exist, just pretend it does. It certainly makes some sense to get a little vetting of an idea to make sure there is some potential for a new contribution, but I think I’m clearly on the other side of that balance right now.
  3. Collaborate more! For students who have gone through the whole research cycle, there’s a pretty simple algorithm: keep doing it. For students like me, following the footsteps of someone else who has may be the best option. Actually I’ve been doing this, but I need to make it more of a routine again. In fact, I had another paper, submitted on the same day, based on several weekly meetings last semester.

So, I plan to remind myself to read methods sections first when it makes sense, to allocate some time for “fake paper writing”, and to try to schedule a regular meeting time with some other students. In what time is left, I’ll keep reading. :)

Short thoughts

  • With Stanford professors bringing us Udacity AND Coursera, online learning is continuing to heat up like crazy.
  • More on metacognition: It even works at the brain-level by self-regulating based on real-time fMRI feedback (McCaig et al., 2010). The idea of up- and down-regulation reminds me of the left and right-brain switching, so I’m curious if there is modern neuroscience research along those lines.
  • Cool, recent paper on external representations: Kirsh, 2010. A good starting place to see why this stuff can be so important. According to Kirsh, it enables us to think the unthinkable. There are implications for AI too. A computer may be better than a human at simulating a consistent environment, but there are still limits to the complexity.

Weekly review through January 22

Visual thinking and representation: A bibliography

Last time I said I would go further into the use of external representations. It’ll be quite a journey. Here’s what the itinerary looks like so far:

  • Marr, 1982 – Vision: A computational investigation into the human representation and processing of visual information – Marr seems to be one of the first to put vision in computational terms, and his framework is often referenced.
  • Tufte & Howard, 1983 – The Visual Display of Quantitative Information – Not that heavy in cognitive theory, but it’s an important work including history and principles about, yes, the visual display of quantitative information (graphs, etc.).
  • Gibson, 1986 – The Ecological Approach to Visual Perception – Gibson thinks about perception as part of a process of an individual interacting with the world, essentially setting the stage to think about visualization in interaction design.
  • Kahneman, Treisman & Gibbs, 1992 – The reviewing of object files – and Rensink, O'Regan & Clark, 1997 – To see or not to see – Two sources about how perception works with regard to working memory and dynamic information, including effects like priming and change blindness.
  • McCloud, 1993 – Understanding Comics – A look into comics as sequential images that convey meaning; presents a spectrum of abstractness of representation.
  • Zhang & Norman, 1994 – Representations in distributed cognitive tasks – and Zhang, 1997 – The nature of external representations in problem solving – Compare problem solving on isomorphic problems, varying which rules are represented externally or internally, or which representations are used. Externally represented rules make problem solving easier, probably because we can use them to save on the cognitive load from internal processing. Different representations can also cause different biases in problem solving.
  • Glasgow, Narayanan & Chandrasekaran, 1995 – Diagrammatic Reasoning: Cognitive and Computational Perspectives – Collection of articles on using visualizations to think and solve problems. Makes the case that AI requires perceptual mechanisms in addition to logical ones to be complete (according to Simon, these are logically but not computationally equivalent – a perceptual process may be much more efficient).
  • Robertson et al., 1998 – Data Mountain: using spatial memory for document management – Demonstrates the power of spatial memory with an application that helped people find documents rapidly
  • Gibson & Pick, 2000 – An Ecological Approach to Perceptual Learning and Development – Applies some of the ideas from Gibson, 1986 to learning and development.
  • Ware, 2004 – Information Visualization: Perception for Design – Infoviz is a major subfield of HCI. Ware covers a lot of techniques and cognitive background.
  • Victor, 2006 – Magic Ink – Investigates the role of information display — as opposed to interaction — as an interface.
  • Moreno & Mayer, 2007 – Interactive multimodal learning environments – Discuss the dual-mode nature of human cognition and similar considerations for learning with multimedia and interactive systems.
  • Goldstone, Landy & Son, 2008 – A well grounded education: The role of perception in science and mathematics – and Kellman & Garrigan, 2009 – Perceptual learning and human expertise – Make the case that perceptual learning is central to learning in general.

<3 library

I mentioned I was confused about whether there were representations beyond symbolic and perceptual. Here is a case for visual and verbal: we can process visual and verbal information simultaneously without one taking up cognitive load from another (Moreno & Mayer, 2007). Alternatively, Glasgow, Narayanan & Chandrasekaran, 1995 mentions cognitive, perceptual, and motor as the types of representations used by a simulated agent in a world.

Within perception, I found a useful breakdown of concepts, also in Glasgow, Narayanan & Chandrasekaran, 1995:

  • Seeing
  • Problem solving by using vision on the world
  • Drawing
  • Using drawings for problem solving
  • Imaging (meaning the use of mental images)
  • Using mental images for problem solving

What’s so special about using visual information? The Zhang papers explore how important it can be for problem solving, and Ware, 2004 expresses it well: “[T]he world ‘is its own memory’ (O’Regan, 1992). We perceive the world to be rich and detailed, not because we have an internal detailed model, but simply because whenever we wish to see detail we can get it, either by focusing attention on some aspect of the visual image at the current fixation or by moving our eyes to see the detail in some other part of the visual field.” In contrast, Scheiter, Gerjets & Catrambone, 2006 found in one learning task that mental imagery was as effective as static images (both more than text-only or animations). That could just be a limitation in the total amount of information conveyed and how much discovery it afforded.

If using visual information is important, then producing it may be as well. Recall that this whole series started when reading Drawing on the Right Side of the Brain. In my first set of readings for the Design Perspectives in HCI course (which I’ll have much more to say about later), Fallman, 2003 claims sketching is a foundation of all forms of design. Tony Buzan promotes mind-mapping as an all-encompassing learning technique. Glasgow, Narayanan & Chandrasekaran, 1995 should prove to have more insights.


Another theme this week was metacognition, particularly in self-assessment and reflection. I won’t repeat my literature review here since I already wrote it in a paper. :) But I’ll ruminate a bit.

It seems clear that reflection can be beneficial at the “macro” level such as strategizing how much time to spend studying based on your progress and goals. What seems to warrant further investigation is how reflectiveness affects things at the “micro” level. Does being more reflective help one pay attention to the right things or be more receptive to learning new information? To changing one’s previous misinformation? Do better effects from reflection come from preparing before a learning activity, by being highly conscious during the activity, or by reviewing the results after an activity? Must reflection come from “within” or can a scaffolded interface prompt the same benefits of conscious reflection?

I thought about this with regard to my Chinese character practice on Skritter. I’ve noticed that I can either be flying through reviews or be highly conscious, trying very hard to recall mnemonics or whatever else might lead me to the answer — and much slower. I can’t decide whether the usefulness of “thinking very hard” is just an illusion. Intensity seems to be a characteristic of deliberate practice, but is it just the difficulty of the items selected or is it really the mental effort?

If the conscious effort makes no difference, then it should be minimized because it is highly inefficient. One area where the interface encourages such effort is when, instead of writing a character or tone, I have to produce the definition. Instead of just writing off the bat, I have to stop and think about whether I know it well enough that I’m ready to display the real answer. Another is in deciding whether to add a mnemonic for a character. To eliminate that mental effort, the interface could automatically ask me to add a mnemonic if I’m missing the character frequently. Perhaps a way to have both the speedy flow and the benefits of reflection is to go through a patch of characters quickly and then be able to review the ones I’ve studied, see which of those I missed, and add mnemonics when I want. Showing the characters together like that has the added bonus of being able to compare between them.
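The automatic mnemonic prompt could be a one-line rule; here’s a minimal sketch (the threshold and window are numbers I made up, not anything Skritter does):

```python
def should_prompt_mnemonic(recent_results, miss_threshold=3, window=10):
    """Decide whether the interface should ask the learner for a mnemonic.

    recent_results: list of booleans, True = answered correctly, newest
    last. Prompt once the misses within the last `window` reviews reach
    `miss_threshold`. Both parameters are illustrative, not tuned.
    """
    window_results = recent_results[-window:]
    misses = sum(1 for ok in window_results if not ok)
    return misses >= miss_threshold

# Missed 3 of the last 5 reviews: time to stop and write a mnemonic.
print(should_prompt_mnemonic([True, False, False, True, False]))  # True
```

The point is that the decision moves out of my head and into the interface, so the fast review flow never has to stop for it.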

There is some overlap with the representation stuff. The visualization of data to better support reflection is a big area. Another possibility is reflecting through re-representing the knowledge. Is that worth letting people do on the individual level, or would it always be the case that an individual’s “better representation” should have been presented in the first place?

Weekly review through January 15

Representation: symbolic or otherwise

Pulling together some more references on a very recurrent theme…

Koedinger, Alibali & Nathan, 2008, looking at how students solve algebra problems, found that word problems (which are said to have grounded representation) can be easier than the same problem in symbolic form (abstract representation), confirming a result (Koedinger & Nathan, 2004) that I discussed previously. However, when the problem is complex in a particular sense (an unknown is double-referenced), the symbolic form becomes easier.

simple word problem < simple symbolic problem < complex symbolic problem < complex word problem

The result seems to confirm an intuitive notion: our “everyday” reasoning is fine for certain types of problems. But some things are complex beyond what our minds are readily capable of, so we must learn something new: a symbolic language.

Another way to consider the problem is the notion of transfer. When can something we’ve learned be used beyond the original context in which it was learned? With a mathematical background, I’m biased to feel that pure symbolic abstraction is a holy grail: perfectly transferable across domains (assuming, of course, you can figure out a model).

Goldstone, Landy & Son, 2008 took issue with this conclusion. I’ve referenced Goldstone before: even in a very symbolic scenario (simplifying an algebraic expression), we apply perceptual biases (grouping terms that are closer together) (Landy & Goldstone, 2007). Here he claims that we learn best when interpreting perceptual patterns in grounded representations. Since Hadamard, 1954, we’ve known this is true even among professional mathematicians. I think that mathematicians and programmers, looking deep inside, would lay down their equations and languages in a heartbeat if it weren’t for the power they wield.

This too makes sense. We first approached artificial intelligence by making computers really good at symbols and logic. That didn’t go very far. It’s clear that we use a lot of other mechanisms in our reasoning. (Gary Marcus has some popular science along these lines that might be good reading.)

Let me conclude this and move on for good: abstraction is powerful, but humans learn and work better with grounded representations. Now we can begin to explore how those representations should be presented.

(It just occurred to me that I’ve used abstraction and symbolic representation interchangeably, but they are not the same. Written language is symbolic, and we handle it relatively fine. I think the real issue is abstraction or formalism rather than “symbolism.” This may have deeper implications, so I might not be over this topic after all.)

Zhang, 1997 presents a framework for the use of external representation in problem solving. First, he shows that representation matters: in four isomorphs of the tic-tac-toe game, players can learn to win based on what perceptual biases are afforded by the representation.
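One classic way to build such an isomorph (not necessarily the exact games Zhang used) is “number scrabble”: players alternately claim the digits 1–9, and whoever first holds three digits summing to 15 wins. Laying the digits out as a 3×3 magic square shows it is exactly tic-tac-toe, with none of the perceptual support of the grid:

```python
# The 3x3 magic square maps number scrabble onto tic-tac-toe:
# owning three numbers that sum to 15 == owning a line on the grid.
magic = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

lines  = [row for row in magic]                               # rows
lines += [[magic[r][c] for r in range(3)] for c in range(3)]  # columns
lines += [[magic[i][i] for i in range(3)],                    # diagonals
          [magic[i][2 - i] for i in range(3)]]

assert all(sum(line) == 15 for line in lines)
print(len(lines))  # 8 winning lines, same as tic-tac-toe
```

Same abstract game, yet the grid version makes the winning lines pop out perceptually while the number version forces mental arithmetic — which is the point about representation.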

And now a story (I’ve been wondering if “narrative representation” could be considered a separate type of grounding that appeals to human nature, but I digress) where I was thinking about abstraction and external representation:

I played Clue this week, which I haven’t done in probably fifteen years. I quickly realized a rather abstract goal, which is to maximize the amount of information gained from each turn (also important is minimizing information given to other players, but I figured the first was enough to figure out in one game). A great way to gain information requires overcoming congruence bias: rather than trying to confirm that someone has a certain clue, instead try to skip over as many people as possible. Each time you skip someone, you know for certain that there are three clues that they don’t have. The little sheet was helpful as an external representation: for each clue, write down the people who don’t have it. Another good method came unexpectedly just from perceiving the representation: I had also written at the top of the page the three possible clues a player could have when showing a card. I then crossed out a clue whenever their name appeared for it below. When just one remained, I had a definite clue! 
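The cross-out trick on the sheet amounts to a simple elimination rule, which could be sketched like this (card names are hypothetical; this is just the inference I was doing by hand):

```python
def infer_shown_card(asked_cards, not_held):
    """If a player shows a card from a suggestion, but earlier skips
    proved they hold all but one of the named cards, the remaining
    card is a definite clue.

    asked_cards: the three cards named in the suggestion.
    not_held: set of cards this player is known not to have
    (grown every time they skip a suggestion).
    Returns the deduced card, or None if it's still ambiguous.
    """
    candidates = [card for card in asked_cards if card not in not_held]
    return candidates[0] if len(candidates) == 1 else None

# They showed a card for (Plum, Rope, Kitchen), but skips proved they
# hold neither Plum nor Rope -- so the card shown was Kitchen.
print(infer_shown_card(["Plum", "Rope", "Kitchen"], {"Plum", "Rope"}))  # Kitchen
```

The external representation did this bookkeeping for me: the crossed-out names below each clue were exactly the `not_held` sets.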

I would probably have picked up these strategies after playing a few more times, even without combining abstract goals with a good external representation. But I was able to win in one game with complete confidence in my conviction (minus the fear that I screwed up what I wrote), so I think it speaks well for the power of representations.

Learning from video

I said last time that video seems promising, but the learning benefits are unclear. On the test prep site Grockit, learners were found not to use or benefit greatly from video instruction (Bader-Natal, Lotze & Furr, 2011). The author of that paper comments on similar results in a Khan Academy classroom study: students aren’t using the videos much, favoring the exercises instead.

It seems that videos should be able to take advantage of the success of worked examples. Studies have found worked examples to be more effective than the hint system of a tutor (Ringenberg & VanLehn, 2006; Schwonke et al., 2009). But in most worked-example studies, students work on problems interleaved with the examples, or solve steps that are gradually faded from the examples, or are prompted to explain each step. In other words, it’s still an active process compared to simply watching examples in videos. However, Paas & Van Merriënboer, 1994 found that students did better just by reading examples.

I’m not convinced it would work in general. The authors note that students were highly motivated due to a monetary reward for higher scores. I’d also be curious how results hold up in the longer term. I’m waiting to see if I get any interesting results from my own studies.

Tao and learning

I finished The Tao Is Silent this week. I couldn’t reproduce the ideas in short order, so I recommend reading it for an introduction to Taoism. One thing that bothers me is an old debate: “Nature is good. Humans are natural. But some things that humans do are more natural than others.” The author would perhaps answer that we can simply intuit what is right. For example, he makes some arguments against fad diets based on doing what he likes. Well, in the 35 years since the book was written, it’s gotten easier to get yourself in a really bad place doing that. Society places us in extraordinary non-Tao circumstances.

I think Taoism fits well with the “natural learning” philosophy that I introduced last week (I don’t know if there’s a better name for it; the paleo movement is similar in some sense, as is any “evolutionary X”), but I think we do have an answer to the conundrum: understand natural things by their evolutionary origin and then by how they fit into our culture.

I do like the idea of alternatives to Western notions of discipline, morality, and order. The idea of Kill Math, and really any interaction design, is to make it easy to do the right thing. Maybe that should be: natural to do the right thing.

Does this seem like a paradox? That we are creating artificial things to make the world more natural? It’s like I said last time: we only need these artificial things where society demands we forego our nature.

Short thoughts

  • Early, early version of my HTML Tutor project is on Github. It uses the awesome open-source Ace editor to let you code websites in the browser. That’s already available with jsFiddle and its many clones, but I’ll be adding (audio-based) instruction. I also have an early “codecast” tool for it. It’s like screencasting but records your keystrokes so that the text can be given directly and in real-time to the person you’re teaching.

  • Pictured is my Shuff graph of daily productivity. What you should notice is the huge areas of blue on the days labelled 67 and 68. That’s where I discovered an amazing fact: I can do more by doing less. I restricted my work day to strictly stop at 6 pm, but otherwise made no deliberate changes. My research productivity jumped (compared to any other full day of computer-based work, such as 65). It’s much more motivating and focused to have an end to the day.
  • I started a spreadsheet to track food costs, protein, and calories. So far I’ve learned that whey protein does give me the best protein per dollar though milk and chicken breasts come close. I’d like to learn some mad Excel skills to be able to calculate moving average of prices, compute total costs and nutrition for meals, etc.
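The moving-average part, at least, is easy outside of Excel too. A minimal sketch in Python, with made-up prices:

```python
def moving_average(prices, window=3):
    """Trailing moving average over a list of prices.

    Returns one value per full window; window=3 is arbitrary.
    """
    if len(prices) < window:
        return []
    return [sum(prices[i:i + window]) / window
            for i in range(len(prices) - window + 1)]

# Weekly chicken-breast prices per pound (invented numbers).
smoothed = moving_average([1.99, 2.49, 2.29, 1.89])
print([round(v, 2) for v in smoothed])  # [2.26, 2.22]
```

In Excel the equivalent is an `AVERAGE` over a sliding range; the nutrition-per-dollar totals are just sums and ratios over the same columns.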

Weekly review through January 8

More on natural learning

Human knowledge becomes more difficult to learn as time progresses not only because it expands but also because it bootstraps away from the innate capabilities of our evolved mind.

Society evolves too, but capitalism is not a fitness function for overall human happiness. Our society in its current state favors individuals with exceptional knowledge about using technology to generate wealth, leading to massive wealth inequality (Brynjolfsson & McAfee, 2011).

In short, learning technology is difficult but important. We can progress toward a better condition from two angles: designing technology to better fit within human capabilities and improving education through a focused attack on the difficult problems.

The Scratch interface

Take programming for example. Attempts have been made to humanize the practice of programming through visual programming tools including MIT’s Scratch, Carnegie Mellon’s Alice, and Greenfoot for Java. However, these tools are largely considered “educational use only.” As one recent counterexample, the ubiquitous Bret Victor brings us Substroke, a robust tool for creating dynamic pictures.

Smaller changes can also make progress — syntax highlighting affords greater use of perceptual abilities, watching variables in a debugger augments our cognitive processing limits.

Assessing learning interventions

Khan Academy and Codecademy are raising millions and making educational material available online to large audiences. Debates about whether these efforts are effective can be illuminated by how they approach the difficult problems of math and programming.

Khan Academy has video lessons and practice problems. Codecademy and similar sites, such as Code School, Team Treehouse, and Bloc, use either video or text lessons along with browser-based interfaces that process code typed by the user. Underneath the technology, we can say that there are just two elements to all of these sites: instruction and practice. We will ignore badges.

Codecademy: Will coding in the browser unlock the secrets of the universe?

Some claim that Khan Academy is doomed to failure because the lecture is dead. Physicist Eric Mazur’s story of killing his famous lectures in favor of small group problem solving is often repeated. Of course, none of these sites use traditional, long-form lectures, so the point is lost. In fact, video, while not superior to classroom learning (Clark et al., 2010), is particularly effective for instruction (Mayer & Moreno, 2003) because it takes advantage of certain properties of cognition. And if the point of education is to teach knowledge and skills that we are not born with but are important for society, it is clear that some form of instruction must occur. Pure discovery learning has long been discredited.

“Instruction” is not an answer either — entire careers are invested into finding the best instruction for any one topic. For instance, the Khan Academy critics are touting a study that found a more effective way to teach concepts is to explicitly address misconceptions (Muller, Sharma & Reimann, 2008), but this only makes sense where there are clear and common misconceptions.

My intuition is that a general approach may be a tightly coupled instruction and practice loop. Even for pure factual learning (“constant-constant”), the testing effect (Roediger & Karpicke, 2006) is strong and can be promoted by flashcard-based practice (Kornell & Bjork, 2007). Kellman, Massey & Son, 2010 developed perceptual learning modules for mathematics with great results through practice on specific perceptual skills such as identifying valid algebraic steps. Compare this to Khan Academy, where, in the best case, the student watches one complete example and then goes to do a set of problems that are vaguely related. Are they able to observe and apply the important properties of the example?

I will have to return to discussing Codecademy and its ilk. While they have the instruction and practice loop, it remains to be seen whether either is well-designed. And don’t get me started about ignoring forgetting effects.
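Accounting for forgetting needn’t be complicated. A minimal Leitner-style scheduler, sketched below, is my own illustration, not any of these sites’ actual algorithms, and the intervals are invented:

```python
def next_interval(box, correct):
    """Leitner-style scheduling: each box roughly doubles the review gap.

    box: current box index (0 = newest material). A correct answer
    promotes the card one box; a miss sends it back to box 0.
    Returns (new_box, days_until_next_review). Intervals illustrative.
    """
    intervals = [1, 2, 4, 8, 16, 32]  # days per box
    new_box = min(box + 1, len(intervals) - 1) if correct else 0
    return new_box, intervals[new_box]

print(next_interval(2, correct=True))   # (3, 8)
print(next_interval(2, correct=False))  # (0, 1)
```

Even something this crude spaces reviews out over weeks instead of letting practiced material silently decay.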

Short thoughts

  • I ran my first study this week! My hypothesis was that I wouldn’t learn anything from it. It was just barely falsified: I learned that running a study isn’t so bad, even when the results aren’t informative. However, I never want to design condition-balanced assessments again.
  • I will stick to Evernote for capturing my weekly review items, despite the fact that the Chrome web clipper breaks repeatedly.
  • Neal and I had some intriguing discussions about a writing tutor. There ought to be many opportunities to assist perception and practice for writing skills. I plan to eventually read some books about writing to investigate further.
  • I plan to start tracking protein and maybe total calories when I get back to lifting weights. For the desired 170+ grams of protein per day, I’d need to find cheap and convenient sources. Robb Wolf has discussed cheap Paleo foods.
  • I’d like to have Chinese TV always running on my new TV when I’m in the house. It’s not for lack of trying: the best solution I can think of is to buy a netbook or nettop (or PS3) to keep hooked up to it, playing from video sites.
  • I moved my personal site and converted my resume to a Yaml file from which I generate both a PDF and the HTML. I was going to write up how I did that, but better yet, here’s a GitHub repository from someone else doing similar.
  • I’ve been reading some fiction (Murakami’s Hard-Boiled Wonderland) this week — it’s refreshing. Although I can’t help but to keep thinking about perception and literary symbols. And by some coincidence the plot seems to revolve around some sort of split brain scenario.
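The YAML-to-resume pipeline a couple of bullets up could look something like this sketch. The data is inlined as a dict so it runs without PyYAML (the real pipeline would `yaml.safe_load` a file), the names are made up, and PDF generation is omitted:

```python
from string import Template

# In the real pipeline this dict would come from yaml.safe_load("resume.yaml");
# inlined here so the sketch is self-contained.
resume = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "jobs": [{"title": "Research Assistant", "years": "2011-2012"}],
}

html_tmpl = Template("<h1>$name</h1><p>$email</p><ul>$jobs</ul>")

def render_html(data):
    """Render the resume data as HTML. The same data could feed a LaTeX
    template for the PDF side (that half is left out of this sketch)."""
    jobs = "".join(f"<li>{j['title']} ({j['years']})</li>" for j in data["jobs"])
    return html_tmpl.substitute(name=data["name"], email=data["email"], jobs=jobs)

print(render_html(resume))
```

Keeping the content in one data file and treating each output format as a template is the whole trick.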

Perceptual learning in Stepmania

In high school, when a bunch of my friends and I played Stepmania, we used to make fun of one guy a little because he would ask the better players for tips all the time. For example, they’d suggest hitting the keys harder, and he’d come back later saying, “I’m hitting the keys as hard as I can, and I’m not getting any better!”

Indeed, Stepmania and other rhythm games get swarms of newbies begging the experts for anything to help them. The only advice that has held up is “Practice. A lot.” But, as I plan to talk about more in future posts, there’s a difference between “practice” and practicing a targeted skill that will yield enormous gains. When I used to get destroyed in Beatmania by gaming guru Sean Plott, he once suggested the same (and now he lives out that advice through the targeted learning and effective practice that he teaches Starcraft players in his Day[9] Daily show).

Can we figure out a practice shortcut for Stepmania?

I was influenced by research in perceptual learning that found that people learn relative time between items rather than particular auditory or tactile signals. To improve in timing accuracy, we need to learn those spacings to a very high degree of precision. In Stepmania there are many song choices with hundreds of notes each. Memorizing the spacing patterns of a particular song may be a contributing factor, but there are just too many — what’s needed is a strong ability to adjust on the spot. Luckily, there is instantaneous feedback: arrow judgements rating each finger press with a score from “Miss” to “Flawless”.
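The judgement feedback is just a mapping from timing error to a label. A sketch of that mapping, with invented window sizes (Stepmania’s actual windows are configurable and differ from these):

```python
def judge(offset_ms):
    """Map a keypress's timing error (ms from the ideal beat) to a
    judgement label. Window sizes here are illustrative only.
    """
    error = abs(offset_ms)
    windows = [(15, "Flawless"), (30, "Perfect"), (60, "Great"),
               (100, "Good"), (140, "Boo")]
    for limit, name in windows:
        if error <= limit:
            return name
    return "Miss"

print(judge(-12))  # Flawless
print(judge(85))   # Good
print(judge(200))  # Miss
```

The learning problem is then to shrink your distribution of `offset_ms` values, which is exactly what high-precision feedback on each press supports.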

Perfect: not good enough

Now, you need to be able to compare those judgements to your actions. You have three choices: watch the arrow animation after you hit the key, feel the tactile feedback from hitting the keys, or listen to the beat produced from your fingers. I don’t think anyone does the first. Experienced musicians may benefit more from the second. But I would say the third is far and away the best method because it’s the most salient way of getting that timing information. Yet so far I have done this very little. It’s much more tempting to listen to the real music. This finally explains the advice to hit the keys harder — it just doesn’t do any good unless you listen to them!

I tried it out on one of my most frequently played songs: I listened to the beat made by my fingers, carefully watched the arrow judgements, and let the real music fade away. My performance went from a previous best of 74% to 86%, eliminating about half of the mistakes separating me from a perfect play. That’s with zero practice of this newly developed skill. What may be more amazing to some people is that I didn’t make this simple connection before, but I think it really illustrates the power of observing the right thing as a lesson from perceptual learning.

Weekly review through January 1

The blog

Wee, it’s a blog! From 2012! I just came up with the new title and tagline (the old one: “Thinking about thinking, learning to learn.”). I think it captures pretty well the way I’ve been thinking about learning and human computer interaction in the last couple months. Basically, I don’t quite support the idea that people are going to start using technology to do great things just because it’s there. But people learn effectively and happily when it happens the right way, so the challenge is to figure out what that right way is and enable it with technology.

The blog is mostly for personal benefit, and I’ll talk a lot about systems that I use for my own learning and productivity, but I try to make insights that will generalize to others’ benefit. I’d love to hear what topics may be more useful for others as well as areas where I may be misguided.


End of the “Split Brain trilogy” isn’t going to happen this week unfortunately, but I will continue the discussion of learning symbols from last time. I mentioned Bret Victor’s Kill Math project, which aims at replacing symbolic math in everyday contexts with richer, more relevant, and more comprehensible interactions. This week I found a critique of Kill Math via Dan Meyer’s blog. Meyer quotes the following:

If our goal is to empower students to do more and more interesting mathematics, we can’t just hand them simulators and tell them to go play: we need to teach them how to create those simulators. Doing that requires a lot of math and a lot of programming. So Victor’s “simulation” model of doing math ultimately requires teaching kids a lot of traditional mathematics.

He notes that replacing “simulators” with “calculators” invokes an old debate. Now try replacing “mathematics” with “driving” and “[create] simulators” with “[build] engines”. If we have tools that give you the results you need without symbolic math, why should everyone be forced to learn it? We spend years and years teaching it in schools and still most people struggle to use it in everyday life. We certainly need people who do understand math, just as we need people who can build engines. But the most important thing is to have cars that are easier and safer for getting someone from point A to point B. I think this illustrates one of the central themes of interaction design, as I read in Cooper, 2004 a couple weeks ago: understanding the goals that a person is ultimately striving towards rather than the tasks that currently must be executed.

On the other hand, I strongly believe that mathematical and quantitative thinking are very beneficial, and I think there is a deep question here of whether some form of these simulators can actually support that better than struggling to learn the symbols. In other words, if a person wants to solve a real quantitative problem, are they better off with a math symbolism skill set or a (hypothetical) powerful simulation tool?

Word problems seem like a place to start the investigation. Although the ones in textbooks are usually contrived, the first natural step would likely be verbalizing a problem. Heffernan & Koedinger, 1997 found that symbolic production from these problems is generally more difficult than comprehending the story (and comprehension obviously isn’t an issue if you’re verbalizing the problem yourself).

Despite the difficulty with symbolic production from word problems, Koedinger & Nathan, 2004 found that word problems are less difficult than equivalent problems in symbolic form, which was very surprising. How could it be harder to solve a symbolic problem than to convert a problem to symbols and then solve the same symbolic problem? The answer is that students find the symbolic method so hard that they skip it altogether, something they can only do with the word problem, by guessing and checking.
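The contrast between the two strategies is easy to make concrete. Here’s a minimal Ruby sketch (my own illustration, not from either paper) for a word problem like “you earn $3 per lawn plus a $5 bonus and made $26; how many lawns?”:

```ruby
# The word problem hides the equation 3 * lawns + 5 = 26. Two ways to solve it:

# Symbolic route: translate to an equation, then isolate the unknown.
def solve_symbolically(rate, bonus, total)
  (total - bonus) / rate            # lawns = (26 - 5) / 3
end

# Guess-and-check route: try candidate answers until the story "works out".
def solve_by_guessing(rate, bonus, total, max_guess: 100)
  (0..max_guess).find { |guess| rate * guess + bonus == total }
end

puts solve_symbolically(3, 5, 26)   # => 7
puts solve_by_guessing(3, 5, 26)    # => 7
```

Both routes land on the same answer, but the second one never requires producing the equation, which is exactly the step students skip.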

My prediction is that the winner is a simulator that allows an interaction based on the more intuitive guess and check, while providing constraints based on the symbolic facts. It would take a lot of work to produce something that is flexible to work with fairly arbitrary problems, but I think it is a direction worth pursuing.

A few weeks ago, Neal sent me some thoughts about math education along the lines of reducing core math requirements, replacing that time with projects where the skills are applied. I responded with an idea of splitting education into a whole-group socialization and core skills phase and an individualized, specialized phase. Currently, this occurs somewhat, with undergraduate education being a slow transition from the first to the second. The first phase is much better executed in Asian nations, though (Stevenson & Stigler, 1994), and the second only really exists in grad school. I’d also say the shift should happen earlier: by early high school everyone is bored in their math classes, whether because it’s too easy or too hard.

PD/productivity progress

  • I spent some time on an annual review as described on the Art of Non-conformity blog. I found it hard to imagine a lot of what I’d be doing in 2012 — at the end of 2010 I had no idea where I’d be for grad school or what I’d be doing. But it was nice to set up some big quantitative goals (3000 Chinese characters, woo) and some major milestones with deadlines. I ended up with a nice format I think. Contact me if you’d like some examples from it.
  • And the theme of 2012? The year of design. I’m aiming for the triple threat: designing, implementing, and user testing. The implementation phase is what I value the most, but I’m looking forward to learning more about the design process and validation methods. I made two demos and Shuff improvements over the last week, which were all really fun. I even did a little user study with my dad. What I should be achieving in the medium term is submitting papers, but since I don’t know how to do that very well, I’m going to focus on my strengths, and hope my advisor can help me sort it out. Anyway, I want to design and create in the long term, so we’ll see whether that means I’m in the right place.
  • I’m enjoying Evernote, which means taking content away from Researchr. The way I’d really like to use it is to be able to go through all notes since my last weekly review and get a really good sense of what I was up to that week. As I said, Researchr is great for articles and bibliography management, but the fact that I can go through everything I added to Evernote quickly and visually in chronological order is indispensable, and the ability to quickly view and rearrange notes and notebooks is more satisfying than the wiki pages. I can think of ways to get information managed by Researchr into Evernote fairly easily, but not how to reproduce some of the features of Evernote in Researchr. So it looks like I’m going to try switching to Evernote as my primary knowledge manager. I think it’s OK that I end up with a little less content on my web-based wiki, because I’m focusing on having that information in a more digestible format through the weekly review.

Weekly review through December 26

Originally written December 26, 2011.

The Power of Full Engagement

Personal development book, second reading (Loehr & Schwartz, 2006). The big idea is that managing your energy is essential, and it rings true as I spend a lot of my time disengaged and lethargic. When I’m energized and engaged, I’m happier, I make others around me happier, and I do more interesting things, which lengthens the effect.

The main metaphor is to an athlete who improves through periods of training — particularly, training that pushes limits[1] — and rest. Somehow the main tool for this is developing rituals designed to help improve performance along several targeted areas. By ritual, I mean some well-defined activity that is highly prioritized at a specific time once or twice a week.

I recall in The Art of Learning by chess-prodigy-turned-tai-chi-master Josh Waitzkin[2] that one of the main concepts was developing a ritual — playing the same song, meditating the same way, eating the same snack — that would lead to better performance. This occurred to me when I was in an airport and making good progress, as I usually do there. Is it that, despite all the stress and headache, the ritualistic nature of the whole process leads to a better state of mind? Or is it just that I’m removed from distractions, particularly the computer?

It’s worth trying next semester. This reflection writing is one to start. I’ll target physical energy with weekly In the Groove, which is cardio that’s actually fun. Targeting engagement in social activity seems important for me, though I already have a number of semi-regular rituals. Another one that I’d like to instate is a weekly meal with my cohort in the department. Finally, to work towards focus and mental energy, I’ll start an air-travel-inspired weekly retreat to somewhere distraction-free to work, similar to Cal Newport’s Adventure Studying.

Learning in Zen in the Art of Archery

(clips/notes coming later)

Ritual also shows up in this early Western account of zen, where the author, Eugen Herrigel, trains in archery under a Japanese master. In the ritual practice described, the master engages in some seemingly mundane activity at the beginning of class and the students follow along exactly. They aren’t learning per se, but rather preparing the mind for the lessons to come. Again, this reminds me of the exercises in Drawing on the Right Side of the Brain where the right brain is invoked through activities like contour drawing. While I’m sure there is a complex process of neurochemistry in each case, if it works, it works. The state of mind is described as that “in which nothing definite is thought, planned, striven for, desired or expected, which aims in no particular direction and yet knows itself capable alike of the possible and the impossible, so unswerving is its power — this state, which is at bottom purposeless and egoless, was called by the Master truly ‘spiritual’.”

Like how drawing well is about seeing the shapes, angles, and negative spaces that you couldn’t before, I think this state of mind is about sensing in different ways. Herrigel learns to sense the way breathing helps pull the bow as well as how to feel the moment of release. When leaving, his master tells him not to write about his progress but to send pictures of how he holds the bow — the master knows exactly what he needs to see. Herrigel speculates, “The [zen master] painter’s instructions might be: spend ten years observing bamboos, become a bamboo yourself, then forget everything and paint.” As I remember, this was discussed in The Talent Code, where students would observe the swing of the tennis coach, and it seems even to work for listening to background music after practice.

From split brains to multiple intelligences

This week I’ve also been reading three things that tie in nicely together: How the Mind Works, Intelligence Reframed, and some of the writings of Bret Victor.

Cognitive scientist Steven Pinker’s How the Mind Works, from 1997, presents, at the outset, a view of the mind that I mostly take for granted now. For one, the brain has a huge amount of infrastructure — recognizing faces, understanding language, and so on — that is essentially pre-programmed through natural selection. It takes environmental stimulus to develop the mind, and brain plasticity certainly allows us to repurpose its mechanisms, so individuality comes through the environment, but the big miracle is in the genes.

The evolutionary perspective applies to human-computer interaction through the understanding that humans have evolved with a great ability to do many things, but “using a computer” is not one of them. Instead, computer interfaces have to be designed for what we have. One of the main ways that designer Bret Victor applies this is to more fully enable our visual/spatial intelligence. Instead of the drop-down boxes and buttons that are the status quo, he presents some beautiful ways to display information graphically, which he writes about in Victor, 2006. Another target for Victor is our fetishization of symbolic processing in math and computer science. Like Hadamard, 1954, he shows that scientists don’t actually do much of it (especially as computers can do more and more), and yet it’s almost exclusively what we teach kids, and it has somehow become an awkward standard of communication in professional math. In the early stages of his Kill Math project, he is working on interfaces that make math much more tangible.

Finally, Intelligence Reframed is one of Howard Gardner’s books espousing multiple intelligences — a similar idea that the logical/symbolic type of intelligence that society tends to value is just one of many types. I was hoping this would illuminate the split brain concepts a little more, but I don’t think this book is going to explain the brain mechanisms very much. Similar to Pinker, he justifies the existence of the particular intelligences as parts of the brain that were developed through evolution to better equip us for certain tasks. If the theory is true and predictive, I’m not yet sure what it gains us for improving learning. He claims that there are probably not “horizontal faculties” that can cut across many of the intelligences. While I agree that knowledge transfer between domains is difficult[3], I’m not sure I buy this. It alludes to the direct instruction/discovery learning debate, where Hmelo-Silver, Duncan & Chinn, 2007 claim that guided discovery practices promote general problem-solving/reasoning skills, while Sweller, Clark & Kirschner, 2010 say, “Show us the skills!”

By next week, I’m hoping to conclude some of this thinking about split brains and perceptual learning by applying it to a reflection on our way of doing cognitive modeling and tutoring at CMU. I’m just getting into Pinker’s discussion of the computational theory of mind and production systems, which is directly related.

System updates


Rather than anything complicated for timing, I just switched from 5-minute sprints to a Pomodoro-style 25-minute task for most things, as I mentioned last time. This seemed to result in an immediate boost in recorded points, but with travel and other things I haven’t been able to see trends yet.


I started experimenting with Evernote (again) as a capture tool. Evernote and Researchr are complementary in their strengths. Evernote captures the style of full web pages and is better for mobile collection such as taking pictures. With Skitch I can add notes and highlights to screenshots. Researchr is better for taking clips of web articles and PDFs. Not yet sure how to combine them though there’s some potential in making Ruby scripts for Evernote.
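One plausible bridge is a small Ruby script that shells out to AppleScript to push a Researchr clip into Evernote. This is only a sketch: the `osascript` route and the exact `create note` verb are my assumptions about the Mac client’s scripting dictionary, not something I’ve wired up yet.

```ruby
# Sketch: push a note into Evernote via AppleScript on the Mac.
# Assumes the Evernote desktop client exposes a "create note" verb
# (hypothetical until checked against its actual scripting dictionary).
def evernote_script(title, body, notebook)
  <<~APPLESCRIPT
    tell application "Evernote"
      create note title "#{title}" with text "#{body}" notebook "#{notebook}"
    end tell
  APPLESCRIPT
end

# Hand the generated script to osascript to run it.
def send_to_evernote(title, body, notebook = "Researchr")
  system("osascript", "-e", evernote_script(title, body, notebook))
end
```

If the verb names check out, the same pattern would let Researchr’s wiki-generation step drop a summary note into Evernote automatically.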

A problem with Evernote before was that I’d put cool stuff in there and never look at it again. Now I have a Shuff task for processing things but not exactly sure how to use that yet. During these reflections I can at least go through all notes that were created during the week.

Music (Spotify/YouTube)

I really like this time of year for one reason: best-of-the-year album lists. Spotify makes it really easy to get music (when they have it), but I’ve just now figured out a system to listen to the new stuff I add. I simply keep a playlist folder called “To Listen” and drag in albums as playlists. Unfortunately I can’t sync the whole folder with my iPhone automatically, but it’s not too bad to do each list. I was using one big playlist with a couple songs from each album I wanted to evaluate, but I can’t really get into something just listening to one song and then switching to a potentially totally different genre — this setup is way better.

YouTube has almost everything that Spotify doesn’t but not whole albums. So I just have playlists around types of music. This works really well for things like the music threads in TeamLiquid (over 2000 pages of kpop, wow), where I can just run through the pages of embedded YouTube videos and hit the “+” button.

I get new music on a regular basis with a Shuff task. My favorite site is Rate Your Music, where they keep a running list of best albums of the year, and I shuffle into more typical review sites like Pitchfork and Resident Advisor, forums like We are the Music Makers, and links to the TeamLiquid threads that I’m just working through.

I’ll post my own favorite albums list on Facebook before the end of the year!


  • Came up with a way to structure this post while I was trying to fall asleep. Decided to get up to write it down before I forgot, but it mostly escaped me anyway. Not sure if the fact that it is so heavily on my mind means that it’s actually helping me be more reflective or if it means I should find a better release. Well, maybe I will eventually settle into the more typical blogging-per-idea model, but I’ll Milgram it up and continue the experiment.
  • Took waay too long to write. Wow.
  • I had 11 items recorded for the reflection. A few important things were missing, but it was okay — I had more than enough.

[1] Sounds a lot like deliberate practice
[2] Also voice of the Chessmaster tutorials, which I really wanted to turn into SRS cards
[3] I read Dunbar, 1997 this week, where they found that scientists make many within-domain analogies in their creative reasoning but very few across-domain ones.

Weekly review through December 19

Originally written December 19, 2011.

Why the weekly review?

I had created a new task in Shuff for summarizing the things I was learning or thinking about, but I’ve been skipping it way too much. A lot of those topics are things I’ll be returning to over and over again, so I’d never finish; better to make myself do it weekly. Further, I can do some reflection, like looking at data on Shuff, and later look through all of the reviews and revisit my ideas. This looked really valuable judging from Sacha Chua’s yearly review. This first one will extend a little beyond one week.

I added a shortcut, ctrl+opt+cmd+R, to add a note on the main reflections page. When I do the weekly review via Shuff, I’ll go through the list and make sure I include everything. Much of it will be links to other wiki pages, where I’ve clipped or taken notes. I’ve been wondering whether to re-introduce Evernote into the cycle as universal capture device.

Wondering if once a week isn’t enough…

Improving Shuff for self-tracking

Speaking of data on Shuff, I want to make it more useful. I can’t actually say whether my behavior has changed significantly since using Shuff because I have no measurements from before.

I realized I have two self-tracking/behavior change philosophies that are not necessarily grounded in good theory, but I’m sticking to them for now. One is to track most directly what you want to increase or decrease. The other is: don’t try to decrease anything. So that makes it simple: track what you want to increase and only that.

The problem with tracking time is that I only want to increase time if the activity is inherently fun. Otherwise, what I really want to increase is Chinese skill, interesting things learned, homework completed, etc. Obviously, these can be harder to track, and there are some more complications that I discussed on the

Another subtlety is that I’d also like to increase efficiency, which is calculated from both the goal tracking above and time. So maybe tracking time is good too, especially visualizing the two together. I’ve been arguing with C– about whether direct time or points are better…

Shuff also has the complementary aspect of timeboxing, and I realized that the usual 5-minute sprint doesn’t work for most things. I saw an AJATT post about timeboxing this week — probably an old one — and at the Less Wrong meetup on Friday night, someone was talking about Pomodoro (25 minutes work, 5 minutes break). I used a very poorly executed Pomodoro to power through my final project over the last couple days — it was okay. But anyway, I think Shuff would improve with something more sophisticated, especially for tasks that require extended attention.
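If Shuff grows a Pomodoro mode, the scheduling logic itself is tiny. Here’s a sketch in Ruby of how I might compute the work/break boundaries (hypothetical helper names; Shuff’s internals are my own project and nothing here is implemented yet):

```ruby
# Pomodoro timeboxing: 25 minutes of work followed by a 5-minute break.
WORK_MIN  = 25
BREAK_MIN = 5

# Returns [label, start_minute, end_minute] triples for n pomodoros.
def pomodoro_schedule(n)
  segments = []
  t = 0
  n.times do
    segments << ["work", t, t + WORK_MIN]
    t += WORK_MIN
    segments << ["break", t, t + BREAK_MIN]
    t += BREAK_MIN
  end
  segments
end

pomodoro_schedule(2).each { |label, a, b| puts "#{label}: #{a}-#{b} min" }
```

The interesting design work is everything around this: what counts as a completed box, and how points get recorded when a task spills across several of them.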

Another place the Shuff metaphor is breaking down is for things that are more periodic, like this review (and Great Thoughts Time in the next section). Not sure if I want to explicitly schedule that time, but I need some way of making sure I do it. Certainly don’t want to have to keep skipping it six days a week.

Finally, my miscellaneous actions task is terrible!!! Going to be late sending Christmas cards, as always… Disappointing.

Research methods and split brains

I’ve been reading a lot about research methods — general research/thinking skills (Hadamard, 1954, “You and Your Research”), interaction design (Cooper, 2004, just starting at this Intro to HCI), and design for education (thought Soloway, Guzdial & Hay, 1994 was nice, from T–‘s thesis proposal).

And split brains? Well, one of the things Hadamard talks about is what goes on in the conscious and unconscious when mathematicians come up with good ideas. One observation is that words and images cannot co-exist in the mind. Meanwhile, I’ve been reading Drawing on the Right Side of the Brain, which is about exactly that — training yourself to let the right brain take over and work in images rather than symbolism and words. This also seems to relate to some perceptual learning stuff I’ve been reading, particularly Landy & Goldstone, 2007 — symbolic thought ain’t so symbolic.


  • Hadamard: “To invent is to choose.”
  • Hamming: Be courageous, knowledge & productivity pay off, Great Thoughts Time: Friday afternoon thinking just about the important problems of the field
  • Cooper: Use personas for your design. Design for their goals not their tasks (reminds me of http://zachholman.com/posts/shit-work/)

Paleo eating

I’ve been cooking from the Primal Blueprint Quick & Easy Meals book. Great so far!

Research ideas!

Thanks to Neal, I’m finally excited about a really concrete research idea! We want to build a web development tutor for people at his library in Mali. The concern is whether it’s wasteful to teach HTML when drag-and-drop interfaces are potentially much easier. Lots of cool research questions though.

I’m starting to think my vague other ideas will have to wait for thesis work, if they can be developed by then. But I had an inspiration after the enervating final P&T project in which we worked on “materializing energy” (Pierce & Paulos, 2010). I realized a theme of my research ideas is “materializing ideas” and “materializing processes” — in analogy to Hargadon & Sutton, 1997, representing concepts as something you can brainstorm with, storing your memories in some form that can be revisited (through exploration or spaced repetition), and making programs that get you to follow a particular process. Still pretty vague, but I’ll see what comes from that.