Introducing Shuff (v3)

I’m happy to report that I finally fixed a major bug today, which prompted me to announce (an early version of) Shuff instead of writing the weekly review :) Shuff is at http://shuff.herokuapp.com/

Here it is at a glance. Yep, my stuff is mostly in Chinese, so I’ll walk you through it. (Don’t worry, you can put everything in whatever language you want!)

Basically, Shuff gives you a glance at a bunch of activities to help you choose what to do next. When you decide on something, you can focus specifically on that task. Each task has an action list and a timer. Below is my classwork task (the papers are linked to my research wiki, which links to my locally stored PDFs).
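
If you like to think in data structures, here’s roughly the mental model, as a simplified sketch (the names are illustrative, not Shuff’s actual schema):

    // Simplified sketch of a Shuff task; field names are illustrative only.
    var task = {
      name: 'classwork',
      context: 'work',         // every task lives in a context
      actions: [               // the action list shown in the focused view
        'read assigned paper',
        'draft homework answers'
      ],
      display: 'action_list',  // how the task is summarized at a glance
      entries: []              // one timestamped record per press of Finished
    };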

Let me go back to the initial view and explain some details. What you saw there is my work context. A context is basically just a way to organize tasks so that you don’t have to filter through as much irrelevant stuff. Each task within a context can be displayed in one of several ways.

The first task, read article, displays a random action from the action list, which is again a link to my research wiki here. The second is clear inbox. This is a value chart that shows the number of messages in my inbox each time I complete the task (which I enter in the box next to the “Finished” button, as you can see above). Third is research, where I keep various research-related action items. This is a log chart that keeps track of how many times I’ve finished the task over the last two weeks. Finally, I again have the classwork task, which just shows my action list.
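
To make the four display modes concrete, here’s a sketch of how each one might summarize a task for the glance view (simplified, with made-up names, not the actual rendering code):

    // Sketch of the four display modes; returns whatever the glance view shows.
    function summarize(task) {
      switch (task.display) {
        case 'random_action':  // e.g. read article: one suggestion at a time
          return task.actions[Math.floor(Math.random() * task.actions.length)];
        case 'value_chart':    // e.g. clear inbox: plot the entered values
          return task.entries.map(function (e) { return e.value; });
        case 'log_chart': {    // e.g. research: completions in the last two weeks
          var cutoff = Date.now() - 14 * 24 * 60 * 60 * 1000;
          return task.entries.filter(function (e) {
            return e.time >= cutoff;
          }).length;
        }
        case 'action_list':    // e.g. classwork: show the whole list
          return task.actions;
      }
    }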

There are also some cool things in the left sidebar. That colorful chart shows my activity within each context over the last seven days. If you’re familiar with the old Shuff, roles and points are gone; the number of tasks completed in each context is used instead. I feel like I have a few too many contexts to make much sense of it, so that’s something to work on.
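
For the curious, the aggregation behind a chart like that can be as simple as this sketch (again illustrative, not the actual code):

    // Count completions per context over the last seven days.
    function activityByContext(tasks) {
      var cutoff = Date.now() - 7 * 24 * 60 * 60 * 1000;
      var counts = {};
      tasks.forEach(function (task) {
        task.entries.forEach(function (entry) {
          if (entry.time >= cutoff) {
            counts[task.context] = (counts[task.context] || 0) + 1;
          }
        });
      });
      return counts; // e.g. { work: 12, research: 5, home: 3 }
    }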

Below the chart is a special context that is always displayed. Just create a context called “all” and it’ll work! In my “all” context I have a qualitative “current mood” status (using the last-value display), a randomly suggested playlist (using random action again), and — my favorite of all — a five-item to-do list for the day, as suggested to me by Julia (using action list). I generally try to make this list each morning; it basically just points out actions across various tasks that I’d particularly like to finish that day.

Here’s one more use case that isn’t focused as much on the tasks but is just a way to aggregate tracked information. It’s tracking my progress in weightlifting (first five, units not disclosed :P) and sleep (last one, in minutes past midnight). Again, the process for tracking these is to click on the task, enter the value next to the Finished button, and click the Finished button. It will record the current time (so there’s no way to enter information in the past for now).
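
Conceptually, every press of Finished boils down to appending a timestamped record, something like this sketch (illustrative, not the literal implementation):

    // Record a completion: the optional value plus the current time.
    // The timestamp is always "now", which is why past entries can't be
    // backfilled yet.
    function finishTask(task, value) {
      task.entries.push({ value: value, time: Date.now() });
    }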

There ought to be many ways you can use this, so I encourage you to explore it yourself. I’m still trying to figure it out myself. I’d also be glad to chat with you if you want some advice about setting up. If you decide to try it out, you will undoubtedly encounter problems. A good thing to do is add them to the Github issues if they aren’t there already (a better thing to do is fix them and send a pull request :))

Weekly review through April 17

AERA reflections

Since this is my first conference, I’m hesitant to pass judgment, but I might as well learn what I can from the experience. In short, most of the presentations I saw were not so inspiring. Here are some of the problems, particularly with technology-related talks, that struck me. I’ll aim to avoid these problems myself.

  1. Boring. Education has a lot of huge, relevant problems. If you did good work, you should have something that provides evidence toward one of them. And if not that, then there must be something interesting about why it didn’t work. Whatever it is, it should be able to draw me in within the first several seconds of your talk. Instead, most talks lost me, and I don’t really have much else to say because I didn’t pay any attention.
  2. Research based on memes rather than well-developed theories. Kids learn differently in the 21st century, games are fun, we should be collaborative, etc. In some cases there may be some strong theory behind it, but the theory is treated entirely superficially. A very common example is to cite a major constructivism paper to justify “kids should be doing X”, where X is anything other than directly receiving instruction. My thinking here is that if you aren’t really trying to build evidence for some particular theory and aren’t directly using some theory to build what you made, then pretty much the only thing left is that you did something that had some intuitively nice results. So just start with those.
  3. Vacuous presentation of statistics. I saw too many “tables of descriptive statistics” flash by on huge unreadable slides. Statistics is a tool for interpreting data, not just some hoop to jump through. So tell us what it helped you interpret.
  4. Research that doesn’t build on anything. This is partly a problem with the community rather than the presenter, but I think everyone should be desperate to be working with a shared knowledge base and vocabulary. I’d really like to avoid this if I’m working on web APIs for learner data, which I’ll talk about shortly.

It wasn’t all bad though:

Evolutionary learning

My favorite talk of the weekend was John Sweller’s talk on De Groot, Geary, and problem-solving skills. I’ve thought a lot about Sweller’s papers, such as the provocative “Teaching general problem-solving skills is not a substitute for, or a viable addition to, teaching mathematics” (Sweller, Clark & Kirschner, 2010). However, I had never made the connection to De Groot, who was unable to find any difference between experienced and expert chess players except in the ability to recognize chess positions more easily. Sweller’s claim is that the difference between the two is therefore based entirely on domain knowledge rather than on any problem-solving skill. Sweller claims that this transfers broadly because, in Geary’s terminology, problem solving is biologically primary knowledge that humans evolved to do automatically, and therefore it will not benefit from being taught.

What we do need to learn is biologically secondary knowledge. This resonates with my personal definition of education.

But OK, what about something like spaced learning, which I claimed last time to be robust progress towards better learning? First, learning from spacing is not a skill. It happens naturally. The idea of artificially spacing out content is not a learning or thinking strategy per se but rather an instructional strategy. In other words, it’s a modification of the environment that we wish to naturally learn in (see the blog’s tag line again). Think of it this way: things that we see on a spaced schedule are things that are worth memorizing — with less spacing, we might as well just rely on the environment to store the information; with more spacing, they’re probably not that important. That also explains why most people do well enough with some kind of immersion learning.

So, I think it’s important to realize that Sweller’s idea here is about skills for learning, which are a small part of our lives, even in the context of education. That is what I argued last week in a comment to Dan Meyer’s blog post about Sweller. I totally agree that motivating students is a major part of the equation. I think another interesting area to investigate is how this biologically secondary knowledge comes into being, when it is necessary, and how it gets disseminated. Or at least, that’s how I’m justifying my continued interest in philosophy of science, bringing us to…

Researchr and the future of science

Stian Håklev is now doing weekly reviews as well! Stian is the creator of researchr, the tool that, among other things, produces all of the reference links you see in these blog posts. As he mentioned, we met at AERA and talked a bit about what’s next with researchr and the web-based “scrobbling” service I’ve been working on. I was going to talk more about this, but I’m going over time for now. Sunday, maybe.

Web API for learning data

I don’t know how this should look, but now really seems like a great time to do this:

  • Several people have contacted me about my Quantified Self blog post on this topic, seeking to exchange more info about using learning analytics on the web. As I mentioned last time, there’s a whole conference for this stuff!
  • Several talks at AERA seemed to be reinventing student models.
  • Khan Academy continues to reinvent student models. (See here and here)
  • OpenStudy announced today its new SmartScore, an assessment tool described as “a snapshot of a student’s high-performing skills applied to learning and development that delivered results to the learning community”. (Apparently announced at the Educational Innovation Summit, which depressingly overlaps with the educational research conference.)
  • I just found this company that seems to be doing something for accessing LMS data. I don’t know how useful that data gets, but anything to get that stuff out of those terrible silos!
  • Mozilla’s Open Badges are supposed to provide “recognition for skills and achievements that happen online or out of school.” They propose a technical solution to verifying the credentials of badge issuers, but there is obviously an equally important issue of what kinds of badges are useful, how to assess them, etc.
  • The NSF is really interested in big data, especially related to education. So is the Department of Education (the original article cites the Lifelong Learning Record from Microsoft, which I didn’t find too much more about except for this). Apparently the Department of Defense has been onboard since 1997 with their Advanced Distributed Learning initiative.

The problem is that representing this data is hard.

  1. It’s hard for researchers because it’s not clear what to capture in every domain. One could try capturing every mouse click and key press, but that data may get too big to pass around much. Even then, you may still have to capture the surrounding state. My colleague Erik is planning to work on this problem, but it definitely seems dissertation-worthy or beyond.
  2. It’s hard for use by instructors or learners because it’s not clear what is beneficial to display. This is the nature of my personal informatics proposal, but I don’t even have much of a starting point: most of the self-studiers I’ve interviewed for research are not really using existing data.

Nevertheless, it seems like an easy target to at least start making some progress toward.
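
Just to make that concrete, here is the kind of minimal learner-event record one could start with. Everything here is illustrative: the field names and vocabulary are made up, not any existing standard.

    // A purely hypothetical learner-event record. Every field name here is
    // made up for illustration; it is not any existing standard or spec.
    var learningEvent = {
      learner: 'user-123',                           // stable learner identifier
      activity: 'algebra/solving-linear-equations',  // what was worked on
      verb: 'attempted',                             // attempted, completed, mastered, ...
      result: { correct: true, durationMs: 42000 },  // domain-specific outcome
      time: '2012-04-20T15:30:00Z'                   // when it happened
    };

Even a minimal shared format like this would at least let different tools exchange events instead of each keeping them in its own silo.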

Weekly review through April 8

Optimized learning

Coincidentally, both of my courses covered memory this week. To what extent is memory important for learning? In the IES guide Organizing Instruction and Study to Improve Student Learning, the authors claim “It also reflects our central organizing principle that learning depends upon memory, and that memory of skills and concepts can be strengthened by relatively concrete—and in some cases quite nonobvious—strategies” (Pashler et al., 2007). I would argue similarly: all learning involves memory, and a memory can vary in its flexibility, which is really the difference between shallow and deep learning.

One of the key principles of memory for learning is creating desirable difficulties (see Bjork & Bjork, 2011). The basic idea is that if we have easy retrieval access to a memory — like having the answer right in front of us — then we won’t create strong retrieval paths to the stored memory. Desirable difficulties can be achieved by spacing, by obscuring the thing to be learned (foot -> s–e instead of foot -> shoe), etc. So more difficult is better, and the limit on “how difficult?” is cognitive load, or merely an inability to comprehend what’s going on (foot -> —-).

Spaced repetition systems are really good for desirable difficulties via spacing. And so it seems that, finally, my preoccupation there is justified by science!
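
To see why, consider a toy version of the scheduling idea. This is not the algorithm any particular system actually uses; the numbers are arbitrary:

    // Toy spacing schedule: each successful recall roughly doubles the gap,
    // keeping retrieval difficult but achievable; a failure resets the gap.
    // Arbitrary illustrative numbers, not any real SRS algorithm.
    function nextIntervalDays(previousDays, recalled) {
      if (!recalled) return 1;              // forgot: see it again tomorrow
      return Math.max(1, previousDays * 2); // remembered: double the spacing
    }
    // While recall keeps succeeding: reviews at 1, 2, 4, 8, 16, ... days apart.

Each review arrives just as retrieval is getting hard, which is exactly the desirable-difficulty sweet spot.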

But moving beyond “How do we learn what we want to learn?”, another question is “What do we want to learn?” When people talk about how education needs to be updated for the modern world, I think this must be the most important question. And it’s not a question that needs to be answered, it’s one that needs to be constantly asked. For instance, this week I started learning Backbone.js, which was released in 2011, I believe. So that could not have been an answer to the question just over a year ago.

The question most people ask in school is instead “Why do we need to learn this?” and educators tend to give unsatisfactory answers. It might be better if we can find a way to enable that question to be answered by starting with “What do I want to learn?” As in: What do I want to learn? How to make cool websites. How do I do that? Learn [today’s hip development framework]. How do I understand all this? Learn the basics that we wanted to force you to learn from the start.

Because there are countless different answers to the question — and that’s a good thing — it points to a clear need for personalized learning. Not just self-paced, not personalized badges, but a finely tailored answer to the “What do I want to learn?” question.

I have an intuition that one small way to support this question is by helping people keep track of their learning progress for arbitrary topics. The thing is that people don’t really do this spontaneously. Sometimes there is a minor feature to mark one’s progress, such as an indicator of which lectures have been watched in an online course, or a flashcard program will tell you how many total things you’ve learned. But those are answering questions about “What have I done in this tool?” and not “What have I really learned?” and there isn’t really an attempt to relate those.

Anyway, I wrote a blog post on this topic for Quantified Self: Personal Informatics for Self-Regulated Learning. Apparently, this is not as unexplored an area as I thought: there is a whole conference, Learning Analytics, along these lines.

What are some of the ingredients besides self-tracking? Some thoughts offered without proof: organizing knowledge, applying science/rationality methods to assess validity of claims, supporting spontaneous communities (StackExchange or even forum/chatroom/wiki software generally), rapidly creating assessments (which may be inherent in the desired activity, as in programming). Ok, this is the point where I realize I’m just talking about “everything” and move on.

Productivity stuff

  • I (prematurely?) released the new version of Shuff! I should be putting out more info about that soon. One cool suggestion from a friend was to make a to-do list of around five items for the day to definitely do, so I’m trying that to combat my well-documented struggle with keeping down my to-do list. Five seems like a good number.
  • Discovering new music: After using one huge “new songs” playlist, which I found unsatisfying because there was not enough familiar stuff and too many shifts in style, and then trying full new albums, which got overwhelming, I’m going to try themed playlists (on Spotify when possible; otherwise Grooveshark or Youtube). I’m also listening to Pandora occasionally to get more ideas for populating the playlists. The new version of Shuff gives me a decent way to manage all this. :) To be explained soon…