Decision making is hard. For instance, we seem to be awful at making hiring decisions. Daniel Willingham describes a study in which interviewers who received accurate answers to their questions gained no advantage over interviewers fed random information. Google has examined its own data and found that structured interviews with a rubric are more effective than brainteasers. One example from marketing professor Adam Alter is particularly galling: people with easier names are more readily promoted.
We like easy names. It's a clear picture of how we are biased, emotional, and limited in processing capacity.
But the other side of the coin, trusting decisions to computers, has its own subtle set of problems, as two recent articles examine.
Nicholas Carr tells us that “All Can Be Lost” when we put knowledge in the hands of a machine. While automation in, for example, flying planes may initially seem safer and more effective, human operators gradually lose their skills.
Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated.
And at that point the human operator is no longer capable of taking over. It's a race between pilots losing their knowledge and technology advancing its knowledge and robustness. In this case, technology seems to be winning: air travel is already much safer than car travel and has been getting safer still.
What about computers as actors in complex systems? In The Real Privacy Problem, Evgeny Morozov makes several points about the inadequacy of technological solutions for protecting our privacy. One in particular: we may not be able to interpret the decisions or predictions of machines. This undermines our legal and political systems, which are rooted in deliberation through natural language.1
My understanding of politics is limited, but there's an analogue in educational technology. In adaptive learning systems, computer models make predictions from learners’ performance and assist them accordingly. Here, too, there is backlash: Dan Meyer, for example, argues that machines may be able to determine that an answer is incorrect, but they can't connect to the human mind making that mistake the way a teacher can. There are tools that try to bridge this gap, such as teacher dashboards, as the blogger from Pearson proposes to Meyer, or open learner models, which expose the computer's knowledge of the student so that the student can scrutinize their own model. As Meyer correctly notes, however, designing a tool that's actually useful entails its own difficulties.
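To make the idea of an adaptive learner model concrete, here is a minimal sketch of Bayesian Knowledge Tracing, one common approach in such systems (the post doesn't name a specific model, and the parameter values below are made up for illustration). The system maintains a probability that the student knows a skill and updates it after each answer:

```python
def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """Update the estimated probability that a student knows a skill
    after observing one answer. Parameters are illustrative:
    p_slip  - chance of a wrong answer despite knowing the skill
    p_guess - chance of a right answer without knowing the skill
    p_learn - chance of learning the skill from this attempt
    """
    if correct:
        # Bayes' rule: P(known | correct answer)
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        # Bayes' rule: P(known | incorrect answer)
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # The attempt itself may also teach the skill.
    return posterior + (1 - posterior) * p_learn

# A short answer history nudges the estimate up and down.
p = 0.3  # prior belief that the skill is known
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
    print(round(p, 2))
```

An open learner model would expose exactly this kind of running estimate (and its inputs) to the student, so the model's reasoning can be inspected rather than taken on faith.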
What can we conclude?
- Inevitably, more and more of our knowledge will be in the hands of computers. We can’t just hope this won’t happen.
- We must understand and codify how humans learn: we are biased, emotional, and limited in processing capacity, but we can learn complex patterns when given accurate feedback. With that knowledge, computers will be able to teach their results to humans. Morozov links to the concept of adversarial design, where technology can “provoke and engage the political.”
- Whether decisions are made by humans, machines, or some system of both, the impact of any single decision should be kept as small as possible. Mistakes then do minimal damage and can even support growth. One failed flight (out of roughly 30,000 per day in the US), as tragic as it may be, can teach us a great deal. (This is Taleb’s idea of antifragility.)
- None of this will be easy. That’s why you should hire me.
1. As contrasted to computation. See again The Mother of All Disruptions.