Tag Archives: errors

Tag-based interfaces and Aging

I was recently interviewed by our campus news service about receiving a Google Research Award to study information retrieval and aging. The research involves designing information retrieval interfaces around the capabilities and limitations of older adults (those age 60 and above). Here is a snippet from the press release:

Richard Pak, an assistant professor of psychology, has received a $50,000 gift from Google to study how older adults navigate the Web and what Web site design features make searches easier. The grant will fund an extension of his research on aging and technology.

“The findings are that when you take a Web site and organize it hierarchically — like how you might organize your documents on your computer with folders within folders — older adults are much slower and make more errors when they are searching for information compared to younger adults,” Pak said. “We think that this is the case because the situation does not allow older adults to use their greater knowledge toward the situation. However, when you take that same Web site and organize it around keywords or concepts instead of folders, older adults are able to bring their wealth of general knowledge to the situation and perform almost equivalently to younger adults in the task.”

That is, older adults seem to perform better using so-called “tag-based sites,” which are Web sites that organize their information around frequently used keywords. Pak said that while tag-based sites are still relatively new, several popular sites use tags. These include Amazon.com, Gmail.com, and the photo sharing Web site Flickr.com.
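For the programmers in the audience, here is a toy sketch of the structural difference (a minimal Python illustration; all folder names, tags, and pages are made up): a hierarchy yields the target only if the user makes the correct choice at every level, while a tag index lets any keyword the user already knows retrieve the page in one step.

    # All folder names, tags, and pages below are made up for illustration.

    # Hierarchical: the target is reachable only by making the correct
    # choice at every level (folders within folders).
    hierarchy = {
        "Health": {
            "Conditions":  {"Arthritis": ["arthritis-exercises.html"]},
            "Medications": {"Pain relief": ["nsaid-guide.html"]},
        },
    }

    def find_in_hierarchy(tree, path):
        """Walk a folder path; one wrong turn anywhere means failure."""
        node = tree
        for step in path:
            node = node[step]      # raises KeyError on any wrong choice
        return node

    # Tag-based: any keyword the user already knows retrieves the page
    # in a single step, so general knowledge maps straight to content.
    tag_index = {
        "arthritis": ["arthritis-exercises.html"],
        "exercise":  ["arthritis-exercises.html"],
        "pain":      ["nsaid-guide.html", "arthritis-exercises.html"],
    }

    print(find_in_hierarchy(hierarchy, ["Health", "Conditions", "Arthritis"]))
    print(tag_index["arthritis"])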

The recently published paper, “Designing an information search interface for younger and older adults,” appears in the latest issue of the journal Human Factors.

This Human Factors Problem Should Be History By Now

An old problem has hit the news again: chemicals that look too much like drinks. This story just came out in New Jersey, where six people drank tiki-torch fuel the color of apple juice.

It is an interesting problem… what SHOULD you make fuel look like? The tiki-torch fuel bottle did not fall into the Fabuloso problem of looking like a sports drink bottle*. It wasn’t being kept under the bar, like the caustic dish liquid that scarred a father and daughter in “Set Phasers on Stun.” It doesn’t taste sweet like anti-freeze.

Yet when six people make the same mistake, within a short time span, across a wide geographic spread, and at different ages, it’s safe to assume something triggered them to think it was drinkable. As the executive director of the New Jersey Poison Information and Education System said in the article from the Star-Ledger:

“During my 40 years in medicine, you get an occasional kid who ingests kerosene, but I have never seen this kind of cluster,” he said.

But what was it? From looking at the bottle, I don’t have a good answer.

*Fabuloso refused to change their bottle shape but made the concession of adding a child-proof cap. As the other stories show, it’s not just kids making these errors, but the cap should at least make the user think as they try to open the “sports drink.”**

**Of course that reminds me of the time I bought a new contact lens solution and opened it to see a bright red bottle tip. “What a neat retro-looking design,” I thought before filling the contact and putting it in my eye. An hour of rinsing later, I still thought maybe I’d blinded myself.

Trust in Automation

I’ve heard a great deal about trust and automation over the years, but this has to be my favorite new example of over-reliance on a system.

GPS routed bus under bridge, company says
“The driver of the bus carrying the Garfield High School girls softball team that hit a brick and concrete footbridge was using a GPS navigation system that routed the tall bus under the 9-foot bridge, the charter company’s president said Thursday.

Steve Abegg, president of Journey Lines in Lynnwood, said the off-the-shelf navigation unit had settings for car, motorcycle, bus or truck. Although the unit was set for a bus, it chose a route through the Washington Park Arboretum that did not provide enough clearance for the nearly 12-foot-high vehicle, Abegg said. The driver told police he did not see the flashing lights or yellow sign posting the bridge height.

“We haven’t really had serious problems with anything, but here it’s presented a problem that we didn’t consider,” Abegg said of the GPS unit. “We just thought it would be a safe route because, why else would they have a selection for a bus?””

Link to original story (with pictures of sheared bus and bridge)

Indeed, why WOULD “they” have a selection for a bus? Here is an excerpt from the manual (Disclosure: I am assuming it’s the same model):

“Calculate Routes for – Lets you take full advantage of the routing information built in the City Navigator maps. Some roads have vehicle-based restrictions. For example, a street or gate may be accessible by emergency vehicles only, or a residential street may not allow commercial trucking traffic. By specifying which vehicle type you are driving, you can avoid being routed through an area that is prohibited for your type of vehicle. Likewise, the ******** III may give you access to roads or turns that wouldn’t be available to normal traffic. The following options are available:

  • Car/Motorcycle
  • Truck (large semi-tractor/trailer)
  • Bus
  • Emergency (ambulance, fire department, police, etc.)
  • Taxi
  • Delivery (delivery vehicles)
  • Bicycle (avoids routing through interstates and major highways)
  • Pedestrian”

[Image: gps-screen.gif]
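As a hedged guess at what goes on under the hood (the unit’s real data model is proprietary; the graph, place names, and clearance values below are all invented), vehicle-profile routing is just a shortest-path search that skips edges a vehicle cannot use. The sketch shows how a bus can still be routed under a 9-foot bridge if the bridge’s clearance is simply missing from the map data:

    import heapq

    # Toy road graph: edges are (neighbor, miles, clearance_ft).
    # A clearance of None means the map data does not record the bridge
    # height at all, one plausible way a "bus" profile could still route
    # a 12-foot vehicle under a 9-foot footbridge.
    GRAPH = {
        "depot":     [("arboretum", 2.0, 16.5), ("highway", 5.0, 16.5)],
        "arboretum": [("stadium", 1.0, None)],   # the footbridge, height missing
        "highway":   [("stadium", 1.5, 16.5)],
        "stadium":   [],
    }
    VEHICLE_HEIGHT_FT = {"car": 5.0, "bus": 12.0, "truck": 13.5}

    def shortest_route(graph, start, goal, vehicle):
        """Dijkstra search that skips edges with known-insufficient clearance."""
        height = VEHICLE_HEIGHT_FT[vehicle]
        queue, visited = [(0.0, start, [start])], set()
        while queue:
            dist, node, path = heapq.heappop(queue)
            if node == goal:
                return dist, path
            if node in visited:
                continue
            visited.add(node)
            for nxt, miles, clearance in graph[node]:
                # The flaw: an unknown clearance is treated as passable.
                if clearance is not None and clearance < height:
                    continue
                heapq.heappush(queue, (dist + miles, nxt, path + [nxt]))
        return None

    # The bus is sent through the arboretum: the restriction check exists,
    # but it cannot fire on data the map never had.
    print(shortest_route(GRAPH, "depot", "stadium", "bus"))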

If we can assume no automation is 100% reliable, at what point do people put too much trust in the system? At what point do they ignore the system in favor of more difficult methods, such as a paper map? At what point is a system so misleading that it should not be offered at all? Sanchez (2006) addressed these questions, relating the type and timing of automation errors to the amount of trust users place in the system. Trust declined sharply (for a time) after an error, so we may assume the Seattle driver might have re-checked the route manually had other (less catastrophic) errors occurred in the past.*
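Sanchez’s actual analyses are more sophisticated than this, but a toy model (parameter values entirely arbitrary) captures the qualitative pattern of a sharp post-error decline followed by slow recovery:

    # A toy trust model (NOT Sanchez's actual analysis; parameters arbitrary):
    # trust drops sharply after an automation error and recovers slowly
    # with each subsequent error-free use.
    def update_trust(trust, error, drop=0.5, recovery=0.05):
        if error:
            return trust * (1 - drop)          # sharp post-error decline
        return trust + recovery * (1 - trust)  # slow recovery toward 1.0

    trust = 1.0
    for use in range(1, 16):
        trust = update_trust(trust, error=(use == 5))   # one error, on use 5
        print(f"use {use:2d}: trust = {trust:.2f}")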

The spokesman for the GPS company is quoted in the above article as stating:

“Stoplights aren’t in our databases, either, but you’re still expected to stop for stoplights.”

I didn’t read the whole manual, but I’m pretty sure it never promises to warn you about stoplights. It does promise vehicle-appropriate routing, the very feature that contributed to the accident, so the analogy doesn’t hold. This is a time when an apology and a promise of re-design might serve the company better than blaming its users.

*Not a good strategy for preventing accidents!

Other sources for information on trust and reliability of automated systems:

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46, 50-80.

Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39, 230-253.

Wiegmann, D. A., Rich, A., & Zhang, H. (2001). Automated diagnostic aids: The effects of aid reliability on users’ trust and reliance. Theoretical Issues in Ergonomics Science, 2(4), 352-367.

The Cognitive Engineering Laboratory

John Wayne, United Airways, and Human Factors

Most everyone has probably heard about the gun accidentally fired in a passenger plane cockpit last week.

But did you hear about the design decisions that led to this human error?

I had to do some detective work (and quiz some gun owners) to find the following pictures:

Here is the gun in question (or similar enough) showing the safety and the spaces in front of and behind the trigger.

[Image: pilotgun.jpg]

Pilots keep the gun in a holster (see below).

Users report some difficulty ascertaining whether the gun is “locked” into the holster. If it is not, then the trigger can be in an unexpected place (namely, higher in the holster than the shaped holster seems to indicate.)

The TSA requires pilots who have been issued these guns to padlock the trigger for every takeoff and landing. Reports are that pilots do this about 10 times per shift. Therefore, let’s assume 10 chances per shift for error in using the holster and another 10 in using the padlock.

[Image: tsaholster.jpg]

The padlock goes through the trigger. It should go behind, to keep anyone from pulling the trigger. If the gun is 100% in the holster, this is the case. If it is not… then the padlock can end up in FRONT of the trigger. The opaque holster prevents a visual check of the trigger.

The holster also prevents a visual check of the safety.

All of this might be forgiven, or addressed with training, if it weren’t for the fact that there are numerous other ways to prevent a gun from firing rather than locking something through the trigger. Remember, we should be on the “Guard” step of “Design out, Guard, then Train.”

I’m not even going to discuss whether pilots should have guns.

“Boyd said he supports the program to arm pilots, saying, ‘If somebody who has the ability to fly a 747 across the Pacific wants a gun, you give it to them.’”

For an amusing take, see “Trust is not Transitive.”

Response to “Paper Kills”

I was reading a lengthy Q&A with Newt Gingrich in Freakonomics this morning, and came across the following:

Q: You discuss a united American front in your book. What healthcare platforms do you think Americans will unite around?

A: “… This system will have three characteristics, none of which are present in today’s system…. It will make use of information technology. Paper kills. It’s just that simple. With as many as 98,000 Americans dying as a result of medical errors in hospitals every year, ridding the system of paper-based records and quickly adopting health information technology would save lives and save money. We must also move toward e-prescribing to drastically reduce prescription errors.”

Newt Gingrich is a powerful man. I am glad he is comfortable with and encouraging of technology. Me too! However, I am terrified of the assumption that information technology systems are inherently better or less error prone than paper systems. “Paper kills” is a nice, tight tag line that people are bound to remember. Is it true?

My earlier post on Paper Protocols saving lives and dollars in Michigan says otherwise. So does research on medical adherence: Linda Liu and Denise Park (2004) identified a paper-based system as one of the most effective interventions tested for helping diabetics remember to measure their glucose.

It is not the material of the system but the design of the system that makes it intuitive, fail-safe, or error prone. Blindly replacing known paper protocols and records with electronic alternatives is not a guaranteed improvement. This is the kind of thinking that brought us the touchscreen voting system.*

“Oh, it wouldn’t be blind,” one might say. I hope so, but a blanket statement such as “paper kills” doesn’t give me confidence. Paper doesn’t kill; bad design does.

I wouldn’t want to end this post without being clear: we need to stop pitting paper against computers and start working out:

1. Under what circumstances each is better

2. Why each would be better

3. How best to design for each

Paper isn’t going away, folks.


*The linked article mentions reliability and security without mentioning usability. I don’t want to go too far afield, so I will save for another day my post on being unable to vote on the Georgia flag (thanks to the compression artifacts in the pictures, it was impossible to tell the flag options apart).

References:

Liu, L. L., & Park, D. C. (2004). Aging and medical adherence: The use of automatic processes to achieve effortful things. Psychology and Aging, 19(2), 318-325.


The Double-Bubble Ballot

U.S. news agencies are reporting on the California ballots that ‘may have lost Obama the California primary.’ The argument is that he would have pulled in the ‘declined to state’ voters (those who have not registered as either Democrat or Republican), but that because of a human factors error with the ballot, those votes may not have been counted. (The inference is that these voters would have supported Obama.)

Succinctly, declined-to-state voters have to ask for a Democratic ballot. Then they must fill in a bubble at the top of the ballot, indicating that they wanted to vote in the Democratic primary. Understandably, many voters skip this step, as it seems a redundant code… the ballot they are holding is the Democratic ballot, so why indicate again that it was the ballot they requested? If you look at the ballot below, it says at the top to “select party in the box below.” Of course, there is only one option, which makes it not much of a selection.

[Image: ballot.jpg]
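For the record, here is the counting rule as I understand it from the news reports, in a minimal Python sketch (the field names are invented):

    # Field names are invented; the rule itself is as reported.
    def count_vote(ballot):
        """Return the credited candidate, or None if the ballot is discarded."""
        if ballot["registration"] == "declined-to-state":
            # The redundant step: the voter is already holding a Democratic
            # ballot, yet the vote counts only if the bubble is also filled.
            if not ballot["party_bubble_filled"]:
                return None
        return ballot["candidate"]

    print(count_vote({"registration": "declined-to-state",
                      "party_bubble_filled": False,
                      "candidate": "Obama"}))   # None: cast, but never counted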

It’s likely this area of the ballot was inserted to produce some interesting statistical information (rather than a pure count of who received the most votes). If only declined-to-state voters filled in the bubble, you could get a count of how many of those voters turned out compared to other years, how many chose to vote Democratic, and which candidate received most of their support. While interesting (I would like to know all of those things), it complicates the purpose of primary voting: to count the number of Americans who support a particular candidate.

Why I am not a conspiracy theorist: People with the best of intentions make critical human factors design errors, even errors that cost people their lives (see “Set Phasers on Stun.”) Sometimes, these errors are created by specific good intentions, as in the Florida hanging-chad fiasco.

[Image: ballot2.jpg]

The reason the choices were staggered on each side of the ballot was to increase the font size, supposedly making the ballot clearer for older voters. This perceptual aid was trumped by the resulting cognitive confusion. These ballot designs may suffer from a lack of user testing, but not from an intentional ploy to keep declined-to-state voters from being counted or to get Pat Buchanan more votes.

Thus, let’s tackle the problem rather than using ‘double bubble’ for a slow news day. Why don’t we demand all ballots and voting machines be user tested? (Security is another issue, for another blog.) If you have an idea of what action to take, please comment so a future post may provide detailed instructions.

NPR covers a good bit of the HF field in one conversation with two doctors

All Things Considered interviewed Dr. Peter Pronovost this weekend about the checklist he developed for doctors and nurses in busy hospitals. On a topical level, this illuminated the working memory demands of hospital work and statistics on how easy it is to err.

As an example, a task analysis revealed almost two hundred steps that medical professionals perform per day to keep the typical patient alive and well. With an average error rate of 1% per step, that equates to about two errors per day, per patient.
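The arithmetic is worth making explicit (assuming, unrealistically, that steps fail independently):

    steps_per_day = 200     # "almost two hundred steps" per patient
    p_error = 0.01          # 1% error rate per step

    expected_errors = steps_per_day * p_error
    p_at_least_one = 1 - (1 - p_error) ** steps_per_day

    print(f"expected errors per patient-day: {expected_errors:.1f}")   # 2.0
    print(f"P(at least one error in a day):  {p_at_least_one:.2f}")    # ~0.87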

Pronovost introduced checklists for each type of interaction, which resulted in Michigan hospitals going from a 30% chance of infection (typical across the US) to almost 0% for a particular procedure.

Could something as simple as a checklist be the answer? No, because this intervention wasn’t “just” a checklist.

Whether or not they were trained in these areas, the doctors involved had to understand:

Team training: Nurses are trained not to question doctors, even if they are making a mistake. Solution: Pronovost brought both groups together and told them to expect the nurses to correct the doctors. (Author note: I’d be interested to see how long that works.)

Social interaction: In an ambiguous situation, people are less likely to interfere (e.g., the doctor didn’t wash his or her hands, but the nurse saw them washed for the previous patient and thinks, “It’s probably still ok.”). Checklist solution: eliminate ambiguity through the list.

Effects of expertise: As people become familiar with a task, they may skip steps, especially steps that haven’t shown their usefulness. (e.g., if skipping a certain step never seems to have resulted in an infection, it seems harmless to skip it). Checklist solution: enforce steps for all levels of experience.

Decision making: People tend to use heuristics when in a time-sensitive or fatigued state. Checklist solution: remove the “cookbook” memory demands of medicine, leaving resources free for the creative and important decisions.

More medical errors–Operating on the wrong side of the patient’s brain!

There sure seem to be lots of medical errors in the news lately. No mention of human factors:

The most recent case happened Friday when, according to the health department, the chief resident started brain surgery on the wrong side of an 82-year-old patient’s head. The patient was OK, the health department and hospital said.

In February, a different doctor performed neurosurgery on the wrong side of another patient’s head, said Andrea Bagnall-Degos, a health department spokeswoman. That patient was also OK, she said.

But in August, a patient died a few weeks after a third doctor performed brain surgery on the wrong side of his head. That surgery prompted the state to order the hospital to take a series of steps to ensure such a mistake would not happen again, including an independent review of its neurosurgery practices and better verification from doctors of surgery plans.

We can only surmise from the short news article, but the source of the problem appears to be working memory:

In addition to the fine, the state ordered the hospital to develop a neurosurgery checklist that includes information about the location of the surgery and a patient’s medical history, and to put in place a plan to train staff on the new checklist.

[link]

Patient record mistakes

LOS ANGELES – The recent chatter on a popular social networking site dealt with a problem often overlooked in medicine: mistakes in patients’ medical charts. The twist was the patients were doctors irked to discover gaffes in their own records and sloppy note-taking among their fellow physicians.

Errors can creep into medical charts in various ways. Doctors are often under time pressure and may find themselves taking shortcuts or not fully listening to a patient’s problems. Others rely on their memory to update their patients’ files at the end of the day. Other mistakes can arise from illegible handwriting or coding problems.

<link>

“From the Doctor’s Brain to the Patient’s Vein”

It appears that HFB needs an entire section devoted to medical error. This is not surprising in light of the thousands of Americans who die from preventable errors each year.

The latest comes from Tanzania, where confusion over patient names earned one patient brain surgery for a twisted knee, and another knee surgery for a migraine.

Mr Didas who had been admitted for a knee operation after a motorbike accident is still recovering from the ordeal – he ended up unconscious in intensive care after his head was wrongly operated on. And chronic migraine sufferer Emmanuel Mgaya is likewise, still recovering from his unplanned knee surgery. The blunder was blamed on both patients having the same first name.
But a hospital official, Juma Mkwawa said it was the worst scandal that had happened at Muhimbili hospital and that, “sharing a first name cannot be an excuse”. The two surgeons responsible have been suspended. (BBC)

Before anyone retreats into the comfort of “that wouldn’t happen here,” I suggest a look at the growing literature on similar medication names and their consequences.
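To make the point concrete, even a crude string-similarity check flags some well-documented confusable pairs. Real look-alike/sound-alike screening uses much richer phonetic and orthographic models, so treat this Python snippet as illustration only:

    from difflib import SequenceMatcher

    # Well-documented confusable pairs; the similarity metric here is only
    # illustrative, not how pharmacy screening systems actually work.
    CONFUSABLE_PAIRS = [("Celebrex", "Celexa"),
                        ("Zantac", "Zyrtec"),
                        ("hydroxyzine", "hydralazine")]

    def similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    for a, b in CONFUSABLE_PAIRS:
        print(f"{a:12s} vs {b:12s} similarity = {similarity(a, b):.2f}")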

It is easy to be the bearer of sad stories and ill tidings. I would rather end on a hopeful note. Below are researchers and companies dedicated to identifying and eliminating causes of medical error.

Please add more in the comments section if you know someone working in this important context.