Last week’s Casual Fridays study was inspired by my annoyance at a website form which required me to constantly switch between typing in information and selecting it from a menu. I wondered if there was really any significant benefit to requiring the use of menus, when typing (for me, anyways) seemed so much faster.
So we developed two versions of the same simple 8-question quiz: one required users to alternate between menu responses and typed responses, while the other allowed respondents to type every response. We asked respondents to answer the questions as quickly as possible. About 1,200 people participated. The point of the quiz was to have obvious questions with easy answers, to simulate entering personal information on a form without requiring you to reveal a bunch of actual personal information. For the most part, we succeeded, with about 97 percent accuracy on the quiz responses.
But did the type of quiz affect how quickly respondents answered? It did, a bit:
The respondents using menus took an average of 75 seconds to finish the quiz, while the fully-typed responses averaged only 66 seconds. That’s about a 14 percent speed difference, or nine seconds per respondent. Put another way, we wasted about an hour’s worth of our respondents’ time by requiring them to use menus.
On the other hand, I had to spend some time cleaning up the data: One question asked “What country features the Grand Canyon, the White House, Disneyland, and Florida?” Typed responses included “U.S.,” “U.S.A.,” “The United States,” “United States of America,” and “usofa.” I had to convert all those responses to “United States” to match the menu responses. In fact, I spent about an hour fixing up the data, so depending on whose time you value more, the time savings of typing may not have paid off at all.
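Cleaning free-text responses like these is a common chore, and it usually boils down to mapping variants onto one canonical label. Here's a minimal sketch of that kind of normalization; the alias table and function name are illustrative, not the actual script I used:

```python
# Map common free-text variants onto the canonical menu label.
# The variant list here is illustrative; real data will have more.
COUNTRY_ALIASES = {
    "u.s.": "United States",
    "u.s.a.": "United States",
    "usa": "United States",
    "the united states": "United States",
    "united states of america": "United States",
    "usofa": "United States",
}

def normalize_country(response: str) -> str:
    """Collapse spelling variants to one canonical value."""
    key = response.strip().lower()
    return COUNTRY_ALIASES.get(key, response.strip())

print(normalize_country("U.S.A."))  # United States
print(normalize_country("usofa"))   # United States
print(normalize_country("France"))  # France (unknown values pass through)
```

The catch, of course, is that building the alias table means eyeballing the raw responses first, which is exactly where my hour of cleanup went.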
But even after I sorted through the data to make the typed responses consistent with the menu responses, there was still a difference in error rate:
On this graph, the green bars represent the questions that could be answered with menus (for the “menu” group). The purple bars show the error rate on questions that both groups had to type. On the menu-answerable questions, respondents using menus made less than a quarter as many errors as those who typed their responses. The menu group was even a little more accurate than the typing group on the questions that everyone had to type.
Overall, the menu group made less than half as many errors as the typing group.
What’s especially interesting to me is the type of errors that were made by the typing group. Of course there were typos, but there were also blunders based on inattention. For the “What country features the Grand Canyon, the White House, Disneyland, and Florida?” question, answers included California, D.C., and Arizona. This clearly results from a misreading of the question as asking about a state rather than a country. For another question we asked “What country features the Eiffel Tower, Paris, and the Louvre?” Quite a few respondents answered “Paris,” again clearly misreading the question. Both of these types of responses simply wouldn’t have been allowed in the menu group, which provided only a list of countries.
Menus also cut down on the number of jokesters responding. For example, one respondent who claimed to be 120 years old finished the survey in just 38 seconds! It might be amusing to me as I compile the survey responses, but I imagine a clerk processing an online purchase would be less amused by pranks like that. By reducing the number of possible responses, menus help tame data sets and generally make life easier for those processing the data.
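Anyone stuck processing typed responses can apply the same kind of constraint after the fact with a sanity filter. A rough sketch of the idea, with hypothetical field names and cutoffs chosen only for illustration:

```python
# Flag implausible survey responses after the fact.
# Field names and cutoffs here are hypothetical, for illustration only.
def is_plausible(response: dict) -> bool:
    age_ok = 5 <= response.get("age", 0) <= 110
    # An 8-question quiz realistically takes longer than ~20 seconds.
    time_ok = response.get("seconds", 0) >= 20
    return age_ok and time_ok

responses = [
    {"age": 34, "seconds": 71},
    {"age": 120, "seconds": 38},  # the jokester from above
]
kept = [r for r in responses if is_plausible(r)]
print(len(kept))  # 1
```

A menu (or a validated form field) just enforces these limits up front, before the prankster's answer ever reaches the data set.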
One aspect of our results is quite relevant for psychology researchers: we see clear evidence here of a speed-accuracy trade-off. The faster respondents were significantly less accurate than slower ones. In many studies measuring reaction times, it’s important to check whether there’s a trade-off between speed and accuracy. Often if such a trade-off exists, it nullifies the study results. In this case, that’s clearly part of what’s going on: the slower menu-based respondents were more accurate not only on the menu-based questions, but on typed responses as well. But I don’t think it nullifies the results. Yes, menus do slow you down a little bit, but the reward is a dramatic improvement in accuracy.
One more thing: The study was based on my annoyance with forms that mix menu items and typing, but perhaps not all people share my disdain for menus on forms. Let’s make this a poll: