All posts by Anne McLaughlin

Associate Professor, Department of Psychology, North Carolina State University, Raleigh, NC

John Wayne, United Airways, and Human Factors

Almost everyone has probably heard about the gun accidentally fired in a passenger plane cockpit last week.

But did you hear about the designs that led to this human error?

I had to do some detective work (and quiz some gun owners) to find the following pictures:

Here is the gun in question (or similar enough) showing the safety and the spaces in front of and behind the trigger.

[Image: pilotgun.jpg]

Pilots keep the gun in a holster (see below).

Users report some difficulty ascertaining whether the gun is “locked” into the holster. If it is not, the trigger can be in an unexpected place (namely, higher in the holster than the holster’s shape seems to indicate).

The TSA requires pilots who have been issued these guns to padlock the trigger for every takeoff and landing. Reports are that pilots do this about 10 times per shift. Therefore, let’s assume we have 10 chances for error in using the holster and in using the padlock.
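As a rough illustration of how those repeated opportunities add up, here is a back-of-the-envelope sketch in Python; the per-operation error rate is an assumed number for illustration only, not a figure from any report.

    # Assumed: a 1% chance of mis-seating the gun or mis-placing the padlock
    # on any single lock/unlock operation (illustrative only).
    per_operation_error = 0.01
    opportunities_per_shift = 10

    p_clean_shift = (1 - per_operation_error) ** opportunities_per_shift
    p_at_least_one_error = 1 - p_clean_shift
    print(f"P(at least one error per shift) = {p_at_least_one_error:.1%}")  # about 9.6%

Even a small per-operation slip rate, repeated ten times every shift across many armed pilots, makes an eventual mis-locked gun more a question of when than if.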

[Image: tsaholster.jpg]

The padlock goes through the trigger guard and should sit behind the trigger, to keep anyone from pulling it. If the gun is 100% in the holster, this is the case. If it is not… then the padlock can end up in FRONT of the trigger. The opaque holster prevents a visual check of the trigger.

The holster also prevents a visual check of the safety.

All of this might be forgiven, or addressed with training, if it weren’t for the fact that there are numerous other ways to prevent a gun from firing besides locking something through the trigger guard. Remember, we should be on the “Guard” step of “Design out, Guard, then Train.”

I’m not even going to discuss whether pilots should have guns.

Boyd said he supports the program to arm pilots, saying, “if somebody who has the ability to fly a 747 across the Pacific wants a gun, you give it to them.”

For an amusing take, see “Trust is not Transitive.”

Response to “Paper Kills”

I was reading a lengthy Q&A with Newt Gingrich in Freakonomics this morning, and came across the following:

Q: You discuss a united American front in your book. What healthcare platforms do you think Americans will unite around?

A: “… This system will have three characteristics, none of which are present in today’s system…. It will make use of information technology. Paper kills. It’s just that simple. With as many as 98,000 Americans dying as a result of medical errors in hospitals every year, ridding the system of paper-based records and quickly adopting health information technology would save lives and save money. We must also move toward e-prescribing to drastically reduce prescription errors.”

Newt Gingrich is a powerful man. I am glad he is comfortable with and encouraging of technology. Me too! However, I am terrified of the assumption that information technology systems are inherently better or less error prone than paper systems. “Paper kills” is a nice, tight tag line that people are bound to remember. Is it true?

My earlier post on Paper Protocols saving lives and dollars in Michigan says otherwise. So does research in the context of medical adherence: Linda Liu and Denise Park (2004) identified a paper-based system as one of the most effective interventions tested for helping diabetics remember to measure their glucose.

It is not the material of the system but the design of the system that makes it intuitive and fail-safe, or error prone. Blindly replacing known paper protocols and records with electronic alternatives is not a guaranteed improvement. This is the kind of thinking that brought us the touchscreen voting system.*

“Oh, it wouldn’t be blind,” one might say. I hope so, but a blanket statement such as “paper kills” doesn’t give me confidence. Paper doesn’t kill; bad design does.

I wouldn’t want to end this post without being clear: we need to stop pitting paper against computers and start asking:

1. Under what circumstances each is better

2. Why each would be better

3. How best to design for each

Paper isn’t going away, folks.

 

*The linked article mentions reliability and security without mentioning usability. I don’t want to go too far afield, so I will save for another day my post about being unable to vote on the Georgia flag (thanks to the compression artifacts in the pictures, which made it impossible to tell the options apart).

References:

Liu, L. L., & Park, D. C. (2004). Aging and Medical Adherence: The Use of Automatic Processes to Achieve Effortful Things. Psychology and Aging, 19(2), 318-325.

 

Death from Branding

If you’re Apple, you want people to see the similarities between your iPod and your iPhone. However, if you are a drug manufacturer, you do not want similarities between adult and pediatric medicine.

Above are bottles of Heparin, manufactured by Baxter Healthcare. Both are blood thinners, but one of these vials is 1,000 times more concentrated than the other. Confusion between these two bottles killed infants at an Indiana hospital back in 2002. This article provides a good overview of past cases.

I actually remember reading about this back then, and thought “Wow, there’s a good human factors lesson. How awful that children had to die to bring it into the spotlight.”

Unfortunately, this lesson stayed unlearned, as two more children were administered the adult drug this week. Because these were the newborn twins of Dennis and Kimberly Quaid, who have already spoken out on 60 Minutes about medication mistakes, we may see the problem addressed more thoroughly in the drug industry.

On a final note, these cases touch on the human desire to blame other humans rather than the systems they interact with. In the Indiana case, a mother who lost her child was quoted as saying:

… who blames the nurses, not drug labeling, for her daughter’s death. “I don’t think it was from the label,” she said. “They are both blue, but one is lighter than the other. How could they mistake those?”

Change blindness, automaticity, expectation, fatigue, and time pressure are but a few of the factors that might have caused the error. Sometimes, it isn’t just a case of someone not “being careful.” This is actually a good thing: we can understand and solve human factors problems. We can’t make someone care.

Unusually quiet morning radio show

What if a radio DJ hosted a morning show and no one heard?

Lesson learned! I will try to make certain to hit ‘publish’ at the end of this post.

From the article:

“I’ve been doing the show three days a week for 10 months and always pressed the button at the right moment. Goodness knows why I forgot this time.”

Mr Dixon, the station’s only employee, will not fire his “excellent” breakfast show DJ, who is one of 35 volunteers who have learnt their radio skills from scratch.

The Double-Bubble Ballot

U.S. news agencies are reporting on the California ballots that ‘may have lost Obama the California primary.’ The argument is that he would have pulled in the ‘declined to state’ voters (those who have not registered as either Democrat or Republican), but that because of a human factors error with the ballot, those votes may not have been counted. (The inference is that these voters would have supported Obama.)

Succinctly, declined-to-state voters have to ask for a Democratic ballot. Then they must fill in a bubble at the top of the ballot, saying that they wanted to vote in the Democratic primary. Obviously, many users might not do this, as it seems a redundant code… the ballot they are holding is the Democratic ballot, so why indicate again that it was the ballot they requested? If you look at the ballot below, it says at the top to “select party in the box below.” Of course, there is only one option, which makes it not much of a selection.

[Image: ballot.jpg]

It’s likely this area of the ballot was inserted to produce some interesting statistical information (rather than a pure answer of who received the most votes). If only declined-to-state voters filled in the bubble, you could get a count of how many of those voters came out to vote compared to other years, how many chose to vote Democratic, and which candidate received most of their support. While interesting (I would like to know all of those things), it complicates the purpose of primary voting: to count the number of Americans who support a particular candidate.

Why I am not a conspiracy theorist: People with the best of intentions make critical human factors design errors, even errors that cost people their lives (see “Set Phasers on Stun”). Sometimes, these errors are created by specific good intentions, as in the Florida hanging-chad fiasco.

[Image: ballot2.jpg]

The reason the choices were staggered on each side of the ballot was to increase the font size, supposedly making the ballot clearer for older voters. This perceptual aid was trumped by the resulting cognitive confusion. These ballot designs may suffer from a lack of user testing, but not from an intentional ploy to keep declined-to-state voters from being counted or to get Pat Buchanan more votes.

Thus, let’s tackle the problem rather than using ‘double bubble’ for a slow news day. Why don’t we demand all ballots and voting machines be user tested? (Security is another issue, for another blog.) If you have an idea of what action to take, please comment so a future post may provide detailed instructions.

Welcoming the Fireproof Elevator

[Image: fire.jpg]

NPR ran a story earlier this week on an intriguing new human factors problem: fire-safe elevators.

The fall of the World Trade Center made it painfully obvious that stairs in skyscrapers do not function adequately in emergencies. We’ve always been warned away from elevators in case of fire, and I would go so far as to say it is part of our collective knowledge from a young age. With the advent of elevators you should use in a fire comes a host of difficulties.

1. Training the zeitgeist: Not all elevators will be replaced, though new tall buildings will all have fireproof elevators. There may be new rules requiring that older buildings over a certain size retrofit at least one elevator as fire-safe.

  • This still makes fireproof elevators the exception instead of the rule. A great research question is how to train people for a small-percentage case. You want the public, of all ages and experience levels, to know “In case of fire, use stairs, unless there is a fireproof elevator around, which you may or may not have noticed while you were in the building.”

2. Warnings and Information: The symbol in this post is probably familiar to all of you. I’ve occasionally seen it in Spanish, but not often. How will we indicate the difference between fire-safe elevators and other elevators?

  • Decals, signs, and other indicators will not only have to indicate which elevators are safe and their purpose, but also whether other elevators in the building are safe or unsafe. My building is square, with elevators on mirrored sides. If one were safe and the other not, I am not sure I could remember which was safe, especially under the cognitive demands of an emergency.

3. Wayfinding and luck: Use of the elevator may depend on the location of the fire.

  • One of the original problems was that elevators opened onto smoke-filled or fire-filled floors. The story did not specify how the new elevators would avoid this. If there is a sensor that prevents them from opening onto such a floor, what if there are people desperately waiting for the elevator on that floor (as they have been re-trained to do)?
  • Should the system be even more complex, with people gathering on certain floors to await the elevator rescue? And then, what if those floors are on fire…

In short, researchers start your engines! We have some training, warning, design, and way-finding work to do.

NPR covers a good bit of the HF field in one conversation with two doctors

All Things Considered interviewed Dr. Peter Pronovost this weekend about the checklist he developed for doctors and nurses in busy hospitals. On a topical level, this illuminated the working memory demands of hospital work and statistics on how easy it is to err.

As an example, a task analysis revealed that medical professionals perform almost two hundred steps per day to keep the typical patient alive and well. With an average error rate of 1% per step, that equates to about two errors per day, per patient.
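The arithmetic behind that claim is easy to check; here is a quick sketch using the figures quoted in the story, with independence between steps assumed purely for simplicity.

    # Figures quoted in the story: ~200 care steps per patient per day,
    # each with roughly a 1% chance of error.
    steps_per_day = 200
    error_rate_per_step = 0.01

    expected_errors = steps_per_day * error_rate_per_step          # about 2 per day
    p_error_free_day = (1 - error_rate_per_step) ** steps_per_day  # assumes independence
    print(f"Expected errors per patient per day: {expected_errors:.0f}")
    print(f"P(an error-free day): {p_error_free_day:.1%}")         # about 13%

In other words, under these numbers a perfect day is the exception, not the rule, no matter how conscientious the staff are.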

Pronovost introduced checklists for each type of interaction, which resulted in Michigan hospitals going from a 30% infection rate (typical across the US) to almost 0% for a particular procedure.

Could something as simple as a checklist be the answer? No, because this intervention wasn’t “just” a checklist.

Whether trained in these areas or not, the doctors interviewed had to understand:

Team training: Nurses are trained not to question doctors, even if they are making a mistake. Solution: Pronovost brought both groups together and told them to expect the nurses to correct the doctors. (Author note: I’d be interested to see how long that works.)

Social interaction: In an ambiguous situation, people are less likely to interfere (e.g., the doctor didn’t wash his or her hands, but the nurse saw them washed for the previous patient and thinks “It’s probably still ok.”). Checklist solution: eliminate ambiguity through the list.

Effects of expertise: As people become familiar with a task, they may skip steps, especially steps that haven’t shown their usefulness. (e.g., if skipping a certain step never seems to have resulted in an infection, it seems harmless to skip it). Checklist solution: enforce steps for all levels of experience.

Decision making: People tend to use heuristics when in a time-sensitive or fatigued state. Checklist solution: remove the “cookbook” memory demands of medicine, leaving resources free for the creative and important decisions.

Design out, Guard, then Warn

Check out this fascinating solution to protecting users from the blade of a table saw.

The way it works is that the saw blade registers electrical contact with human skin and immediately stops. I can’t imagine not having this safety system in place, now that it is available. However, I still have some questions that commenters might want to weigh in on:

1. Unless the system is more redundant than an airplane, it must be able to fail. How do you keep users vigilant when 99.999% of the time there is no penalty for carelessness? (A rough back-of-the-envelope sketch follows this list.)

2. To answer my own question, is the fear of a spinning blade strong enough to do that on its own? I know I’m not going to intentionally test the SawStop.

3. Can we use natural fears such as this in other areas of automation?

4. For great insight into human decision making, read this thread on a woodworking site. What would it take to change the mind of the first poster?

“When do we as adult woodworkers take responsibility and understand the dangers of woodworking. Most accidents happen due to not paying attention to what we’re doing. If we stay focused while we’re using power tools, or even hand tools, we eliminate accidents.”
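Returning to question 1 above: here is a rough sketch of why a guard that works 99.999% of the time still matters at scale, and why any one user rarely sees it fail. The number of blade-contact events is an assumption for illustration only.

    # Restating the 99.999% figure as a 1-in-100,000 failure rate for the brake.
    p_guard_fails = 1 - 0.99999   # 0.00001
    contacts_per_year = 2_000     # assumed blade-contact events across many users (illustrative)

    p_at_least_one_unguarded = 1 - (1 - p_guard_fails) ** contacts_per_year
    print(f"P(at least one unguarded contact per year) = {p_at_least_one_unguarded:.1%}")  # ~2%

Rare enough that no individual woodworker is likely to ever see the brake fail, which is precisely why vigilance tends to erode.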

Intuition vs Experience with Roundabouts

Some people might say a traffic circle is obvious: there is only one way to go. Who yields might be more difficult, but at least we are all driving in the same direction.

Not so.

The following two articles come down on the side of experience over intuition when it comes to the usability of roundabouts.

New Traffic Circle Causes Confusion

Death-crash car launches off the road and into a first floor flat

I am sure the designers believed that if millions of people in London and hundreds of thousands in New Orleans can handle a roundabout, then the citizens of a town so small the article doesn’t even bother to mention where it is would do fine.