He was playing with one of this year’s hot Christmas gifts, a digital photo frame from Kodak. It had a wondrous list of features — it could display your pictures, send them to a printer, put on a slide show, play your music — and there was probably no consumer on earth better prepared to put it through its paces.
Dr. Norman, a cognitive scientist who is a professor at Northwestern, has been the maestro of gizmos since publishing “The Design of Everyday Things,” his 1988 critique of VCRs no one could program, doors that couldn’t be opened without instructions and other technologies that seemed designed to drive humans crazy.
Besides writing scholarly analyses of gadgets, Dr. Norman has also been testing and building them for companies like Apple and Hewlett-Packard. One of his consulting gigs involved an early version of this very technology on the shelf at Best Buy: a digital photo frame developed for a startup company that was later acquired by Kodak.
“This is not the frame I designed,” Dr. Norman muttered as he tried to navigate the menu on the screen. “It’s bizarre. You have to look at the front while pushing buttons on the back that you can’t see, but there’s a long row of buttons that all feel the same. Are you expected to memorize them?”
A very nice mainstream article on the problem of bad human factors in consumer products. In retrospect, I am often surprised how tolerant *I* am of bad design and usability.
If the cockpit of a Boeing 747 were as badly designed as some kitchen appliances, most of us would never make it to Denver alive. Imagine a jet pilot having to fumble around for the landing gear lever because it looks just like all the other controls.
Thankfully, truly savvy designers are finally returning to basic ergonomic principles – simple, comprehensible and intuitive controls that can be distinguished by position, shape, color or touch. Now, if only Bosch would hire one of them.
A recent article in Psychological Science on using Google’s PageRank algorithm to explore fluency effects in memory.
Human memory and Internet search engines face a shared computational problem, needing to retrieve stored pieces of information in response to a query. We explored whether they employ similar solutions, testing whether we could predict human performance on a fluency task using PageRank, a component of the Google search engine. In this task, people were shown a letter of the alphabet and asked to name the first word beginning with that letter that came to mind. We show that PageRank, computed on a semantic network constructed from word-association data, outperformed word frequency and the number of words for which a word is named as an associate as a predictor of the words that people produced in this task. We identify two simple process models that could support this apparent correspondence between human memory and Internet search, and relate our results to previous rational models of memory.
Psychological Science, 18(2), December 2007, pp. 1069–1076.
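The study’s approach can be sketched with a toy example. Below is a minimal power-iteration PageRank run over an invented five-word association network; the edge list and the `pagerank` helper are illustrative assumptions, not the paper’s actual semantic network or word-association norms.

```python
def pagerank(edges, alpha=0.85, iters=100):
    """Plain power-iteration PageRank over a directed edge list."""
    nodes = sorted({n for edge in edges for n in edge})
    out = {node: [] for node in nodes}
    for src, dst in edges:
        out[src].append(dst)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - alpha) / n for node in nodes}
        for node in nodes:
            if out[node]:
                share = alpha * rank[node] / len(out[node])
                for dst in out[node]:
                    new[dst] += share
            else:  # dangling node: spread its mass uniformly
                for other in nodes:
                    new[other] += alpha * rank[node] / n
        rank = new
    return rank

# Hypothetical association data (not the study's real norms): an edge
# cue -> associate means people shown "cue" tend to name "associate".
associations = [
    ("fruit", "apple"), ("pie", "apple"), ("red", "apple"),
    ("tree", "apple"), ("apple", "fruit"),
]

rank = pagerank(associations)
# "apple" is named as an associate by the most cues, so it should come
# out on top -- the model's prediction for the first word retrieved.
best = max(rank, key=rank.get)
```

The intuition matches the abstract: a word named as an associate by many other (themselves well-connected) words accumulates rank, which is a richer signal than raw frequency or a simple in-degree count.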
The U.S. military has been using games for decades to train its troops. Now, for the first time, the Army has set up a project office just for building and deploying games.
No, the Army isn’t about to start handing out copies of Halo 3 to troops, TSJOnline.com notes. “I haven’t seen a game built for the entertainment industry that fills a training gap,” said Col. Jack Millar, director of the service’s Training and Doctrine Command’s (TRADOC) Project Office for Gaming, or TPO Gaming. Instead, the new office — part of the Army’s Kansas-based National Simulation Center — will focus on using videogame graphics to make those dull military simulations more realistic, and better-looking.
DETROIT (AP) – A warning on a small tractor that reads “Danger: Avoid Death” was named Wednesday as the United States’ wackiest warning label by an anti-lawsuit group.
All Things Considered interviewed Dr. Peter Pronovost this weekend about the checklist he developed for doctors and nurses in busy hospitals. On a topical level, this illuminated the working memory demands of hospital work and statistics on how easy it is to err.
As an example, a task analysis revealed almost two hundred steps that medical professionals perform each day to keep the typical patient alive and well. With an average error rate of 1%, that works out to about two errors per day, per patient.
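The arithmetic above can be checked in a couple of lines. This sketch assumes the interview’s figures mean a ~1% error chance on each step and that steps fail independently, which the segment doesn’t spell out:

```python
steps = 200      # care steps per patient per day (from the interview)
p_error = 0.01   # assumed ~1% chance of error on any given step

expected_errors = steps * p_error          # average errors per patient-day
p_error_free_day = (1 - p_error) ** steps  # chance every step goes right
```

The expected value works out to the quoted two errors per patient per day, and under the independence assumption a completely error-free day happens only about 13% of the time, which makes the case for checklists vivid.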
Pronovost introduced checklists for each type of interaction; for one particular procedure, Michigan hospitals went from a roughly 30% infection rate (typical across the US) to almost 0%.
Could something as simple as a checklist be the answer? No, because this intervention wasn’t “just” a checklist.
Whether trained in these areas or not, the doctors interviewed had to understand:
Team training: Nurses are trained not to question doctors, even if they are making a mistake. Solution: Pronovost brought both groups together and told them to expect the nurses to correct the doctors. (Author note: I’d be interested to see how long that works.)
Social interaction: In an ambiguous situation, people are less likely to interfere (e.g., the doctor didn’t wash his or her hands, but the nurse saw them washed for the previous patient and thinks “It’s probably still ok”). Checklist solution: eliminate ambiguity through the list.
Effects of expertise: As people become familiar with a task, they may skip steps, especially steps that haven’t shown their usefulness. (e.g., if skipping a certain step never seems to have resulted in an infection, it seems harmless to skip it). Checklist solution: enforce steps for all levels of experience.
Decision making: People tend to use heuristics when in a time-sensitive or fatigued state. Checklist solution: remove the “cookbook” memory demands of medicine, leaving resources free for the creative and important decisions.
Check out this fascinating solution to protecting users from the blade of a table saw.
The way it works is that the saw blade registers electrical contact with human skin and immediately stops. I can’t imagine not having this safety system in place, now that it is available. However, I still have some questions that commenters might want to weigh in on:
1. Unless the system is more redundant than an airplane’s, it can fail. How do you keep users vigilant when 99.999% of the time there is no penalty for carelessness?
2. To answer my own question, is the fear of a spinning blade strong enough to do that on its own? I know I’m not going to intentionally test the SawStop.
3. Can we use natural fears such as this in other areas of automation?
4. For great insight into human decision making, read this thread on a woodworking site. What would it take to change the mind of this first poster?
“When do we as adult woodworkers take responsibility and understand the dangers of woodworking. Most accidents happen due to not paying attention to what we’re doing. If we stay focused while we’re using power tools, or even hand tools, we eliminate accidents.”
Some people might say a traffic circle is obvious: there is only one way to go. Who yields might be more difficult, but at least we are all driving in the same direction.
The following two articles come down on the side of experience for the usability of roundabouts.
I am sure the designers believed that if millions of people in London and hundreds of thousands in New Orleans can handle a roundabout, these citizens of a town so small they don’t even bother to mention where it is would do fine.