Category Archives: alarms

Code Chartreuse – “Too many codes”

Enjoy memorizing this hospital sign!

How about just announcing the issue rather than first encoding it as a color? “Attention: tornado!” seems like it would be effective.

Elopement, by the way, means a patient, often one with Alzheimer’s, has wandered off and needs to be located. That makes “purple” a code within a code (and makes me want to watch Inception again). This is also one of the few I could understand wanting to disguise with a color.

“Shooter” is another candidate for obfuscation, although I imagine the shooter would quickly figure out that any announcements were about them, while hospital denizens look around and say “Huh, we’ve never heard code silver before. Sounds like something to do with Alzheimer’s.”

Photo credit Jason Boyles.

Mining Tragedy Update

There is new information on the West Virginia coal mine tragedy in which the methane detectors were disabled to prevent automatic shutdown of the machinery. This comes from NPR:

Methane monitors are mounted on the massive, 30-foot-long continuous miners because explosive gas can collect in pockets near the roofs of mines. Methane can be released as the machine cuts into rock and coal. The spinning carbide teeth that do the cutting send sparks flying when they cut into rock. The sparks and the gas are an explosive mix, so the methane monitor is designed to signal a warning and automatically shut down the machine when gas approaches dangerous concentrations.

Because the monitor continually shut down the machine:

On Feb. 13, an electrician deliberately disabled a methane gas monitor on a continuous mining machine because the monitor repeatedly shut down the machine.

Three witnesses say the electrician was ordered by a mine supervisor to “bridge” the automatic shutoff mechanism in the monitor.
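The interlock NPR describes is conceptually simple. Here is a minimal sketch of the intended behavior, and of what “bridging” it removes; the thresholds, sensor, and shutdown hook are illustrative assumptions on my part, not drawn from any actual mining specification:

```python
# A sketch of the monitor's interlock logic as described above.
# Thresholds and hooks are hypothetical, for illustration only.

WARN_THRESHOLD = 1.0      # percent methane: signal a warning
SHUTDOWN_THRESHOLD = 2.0  # percent methane: cut power to the cutting head

def check_methane(read_sensor, warn, shut_down):
    """Poll the monitor once and act on the configured thresholds."""
    level = read_sensor()
    if level >= SHUTDOWN_THRESHOLD:
        shut_down()      # automatic; by design there is no operator override
    elif level >= WARN_THRESHOLD:
        warn(level)

# "Bridging" the monitor, as the witnesses described, amounts to wiring
# shut_down to a no-op: the machine keeps cutting no matter what the
# sensor reads.
```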

There is some discussion as to whether the monitor was malfunctioning and shutting the machine down when it should not have, or whether it was shutting down due to actual methane in the air. People in many industries willfully disable aids meant to keep them safe, and malfunction is only one of the variables that affect the behavior (granted, it’s likely a big one). Here is one example from manufacturing, collected for NIOSH through the FACE database program*:

A 26-year-old Hispanic male knitting machine operator died when he was crushed by moving parts within the knitting machine as he tried to make an adjustment. The victim opened a safety gate and jammed a needle in the “on” button that allowed the machine to operate with the safety gates open.

Finally, in at least one case the safety cutoff itself contributed to an accident.

On June 4, 2004, a 47-year-old co-owner of a recycling business was run over and killed by a Gradall telescopic boom lift (rough-terrain forklift) while he was working underneath it. He had been operating the Gradall, and had shut it down when he momentarily exited the vehicle. When he returned to the machine, he found it would not restart. The Gradall had a safety interlock that prevented starting from the ignition switch while in gear. The contractor was apparently unaware of this safety feature. He checked the batteries, and then crawled underneath the cab area and reached up into the engine compartment with a screwdriver. The screwdriver made contact between the two terminals on the starter, effectively jump-starting the engine and bypassing the safety mechanism that prevented ignition while in gear. The Gradall started and moved forward. The parking brake was not set. The back left tire rolled over the contractor.

In short, I admire but do not envy the designers who have to create these dangerous systems. Their users are inventive, under pressure, and different from each other in countless ways. Designing for safety sounds easy (one can imagine “just make it shut off when they aren’t using it”), but the answers are far from simple. Many of the examples I have seen from other industries show quick and easy ways to bypass a safety system.

  • Machinery automatically cuts off after 8 seconds when there is no weight in the driver’s seat. Worker keeps a heavy tool bag nearby to put on the seat when the worker wants to check on things outside the cab. (See the sketch after this list.)
  • Same system as above – worker tries to jump out of cab and complete task in less than 8 seconds.
  • Worker cannot reach objective with lap safety bar in place, a bar that must be down for machinery to operate. Worker lifts bar then puts it back down across empty seat and reaches for objective with machinery running.
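To see why the tool-bag trick in the first example works, consider a minimal sketch of such a cutoff. All the names and the weight threshold here are my assumptions, not any real equipment spec; the point is that the interlock senses weight, not a person:

```python
import time

SEAT_WEIGHT_THRESHOLD_KG = 20.0  # hypothetical: any sufficient weight "counts"
CUTOFF_DELAY_S = 8.0             # the 8-second grace period from the example

def presence_cutoff(read_seat_weight_kg, cut_off):
    """Cut machinery power 8 seconds after the seat goes unoccupied."""
    empty_since = None
    while True:
        if read_seat_weight_kg() >= SEAT_WEIGHT_THRESHOLD_KG:
            empty_since = None  # a heavy tool bag passes this test as well
        elif empty_since is None:
            empty_since = time.monotonic()
        elif time.monotonic() - empty_since >= CUTOFF_DELAY_S:
            cut_off()
            return
        time.sleep(0.1)
```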

There does seem to be a difference in premeditation between the examples I’ve come across and the idea of ordering an electrician to specifically, and more permanently, remove a guard from a safety system.

*I have posted on the FACE program before. It is a valuable repository.

Photo Credit NIOSH on Flickr

Ahem…your heart has stopped

Darin Ellis sends along this radio story about a woman whose robotic heart has a malfunction warning system that breaks the textbook HF rules of alarm design. I’ll let Darin explain the unfortunate issue:

This woman, who is living thanks to a robotic heart, related a story of the “heart” malfunctioning. Apparently, although it is not prone to malfunction, there is a very particular way to recover from the malfunctioning state [it warns you via an alarm].

She was (luckily) at home. The alarms went off, blaring like crazy. Her young kids reacted to the alarm and started screaming and crying… Then she had to figure out what was wrong and try to remember how to fix it in the right order. With the kids AND the alarm still blaring. Anyone see what is wrong here, or is it just me?

I am sure she is very grateful for this “heart,” but the story made me cringe. I am sure that when your heart literally stops, you don’t need alarms blaring to tell you something is wrong.

[Weblink to streaming audio at The Story]

(post image: http://www.flickr.com/photos/lwr/105509373/sizes/o/)

1960s Human Factors: The Titan II Missiles

I went on a trip to Tucson over the holidays and toured the last Titan II missile silo. A brief history: from 1963 to 1982 these missiles were part of the Cold War strategy of “peace through deterrence” and “mutual assured destruction.” In essence, they provided one reason not to attack the US: even were we destroyed, these missiles would still launch to destroy the Soviet Union.

Politics aside, the control room and interfaces for these missiles were fascinating from a human factors perspective. Gauges, buttons, and rotary inputs reside where we now would expect screens and keyboards. I reflected on this while there: though you need a button for each function, at least the interface never changes.

I snapped the picture below as an example of users improving a system. It appears they were trying to reduce their memory demands by labeling these controls with their upper and lower boundaries. It reminded me of the draft beer handles added to the levers in a nuclear power plant (as discussed by Don Norman in “The Design of Everyday Things”).

A little more history: the Titan sites do not have a perfect safety record. With 54 sites operating for almost 20 years, there were four recorded accidents, all with loss of life, including an early fire in which 53 people died. Fortunately, none of these accidents resulted in a nuclear explosion, not even the one in which the nuclear warhead was blown out of the silo. This site provides a list and engineering analysis of the accidents, and I would be interested in a human factors analysis.

In the accident that ejected the nuclear warhead, the commonly reported story says the explosion occurred when the missile was being serviced and a repairman dropped a heavy tool on the fuel tank. This implies the explosion was instantaneous; however, it actually occurred more than eight hours later, as the fuel escaped through the breach. The best description I could find comes from a newspaper, the Arkansas Leader:

The missile was housed in a silo within a silo that consisted of eight levels. Maintenance crews were working on level two when the accident happened. Attached to the hydraulic standing platforms was a rubberized boot that flipped over between the missile and the platform to prevent anything from falling through if dropped.

The day missile 374-7 exploded, the boot didn’t keep the socket from falling. At 6:30 p.m., maintenance crews entered the silo to begin work after being delayed due to various unrelated equipment malfunctions. The eight-and-three-quarter-pound socket fell, hit the standing platform and bounced toward the missile.

The boot had become too pliable through the years, and the socket fell 70 feet down the silo, hit the thrust mount and bounced into the side of the stage one fuel tank. The 100,000-gallon fuel tank emptied into the bottom of the silo. The fuels interacted and generated heat, which in turn increased the pressure on the tanks. At 8 p.m., the wing made the decision to evacuate the control center.

“When we did that, we had no readings and no way of telling what was going on out there,” Gray said. “We lost all readings,” Gray added.

Many attempts were made to get into the control center to see the readings, according to Gray. At 3 a.m., two people, Livingston and Sgt. Jack Kennedy, made it into the complex. “When they made it in and had to back out because the fuel was so concentrated they couldn’t see, there was some controversy on who told them to turn on exhaust fan 105,” Gray said.

What that did, according to Gray, was pull the heavy concentration of fuel into the equipment area with all the electrical pumps.

“And automatically, boom!” Gray said. “The fire flashed back into the silo, which already had tremendous heat in there, and when the fire flashed back, the stage one oxidizer tank that was already very, very high in pressure, erupted.”

Within one hour of the accident, Gray found the nuclear warhead intact. “It was cracked, but it pegged out on the radioactivity scanner,” Gray said.

Lessons learned from this accident brought about security improvements near nuclear weapons. Security measures to prevent accidents include: all workers wearing a belt with lanyards to attach tools to, a cloth on the platform to reduce the chance of tools bouncing off the platform if they do fall, and a renovation of the platforms.

One of our tour guides had actually been stationed at the silo. He was a great guide and a living piece of history. Consistent with what you might expect, he said the hardest times to keep the missile running and protected were the down times: hours of vigilance and inactivity.

Lastly, I also photographed some of the operation manuals at the museum. Apologies for the fuzziness of the pictures; I’ll re-type the best bits:

7. Key Run Up procedure, if required. (figure 3-26C)….Performed

Reference SACR 100-24, Volume VI to determine if key run up is required.

NOTE

Step 8 can only be performed when SYNC indicator is lighted in NORM modes or TRACK/TRSHD indicators are lighted in SPCL modes.

8. DEMODULATOR CONTROL PRINT MODE thumbwheel switch ….. TEST

Observe printout on teleprinter. Printout is continuous series of characters RY’s or 64’s if transmitter site is transmitting idle message, or normal message traffic.

9. DEMODULATOR CONTROL PRINT MODE thumbwheel switch …Set as directed

*********

*PVD must be continuously monitored visually or aurally.

*The PVD may be monitored by either a team in the silo or a crew member in the control center utilizing the wire type maintenance net.

For entry into launch duct level one, the PFC will be positioned outside of the opened level two launch duct access door, with sufficient probes to reach in the launch duct unless the PVD is required on level one of the launch duct for a sniff check.

Generally, I notice a large number of if/then/or/only-style commands.
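As a small illustration of how dense these conditions are, the NOTE before Step 8 translates almost directly into a guard clause. This is only a sketch: the indicator and mode names come from the manual, but the function itself is my invention.

```python
# Illustrative only: the NOTE before Step 8 rendered as a precondition check.

def can_perform_step_8(mode: str, sync_lit: bool,
                       track_lit: bool, trshd_lit: bool) -> bool:
    """Step 8 is permitted only when SYNC is lighted in NORM modes,
    or TRACK and TRSHD are lighted in SPCL modes."""
    if mode == "NORM":
        return sync_lit
    if mode == "SPCL":
        return track_lit and trshd_lit
    return False
```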

I have only one last thing to say: the fact that Tucson, AZ, Damascus, AR, and Wichita, KS are still around is a testament to the power of training and practice over our human frailties.

Trust in Automation

I’ve heard a great deal about trust and automation over the years, but this has to be my favorite new example of over-reliance on a system.

GPS routed bus under bridge, company says
“The driver of the bus carrying the Garfield High School girls softball team that hit a brick and concrete footbridge was using a GPS navigation system that routed the tall bus under the 9-foot bridge, the charter company’s president said Thursday.

Steve Abegg, president of Journey Lines in Lynnwood, said the off-the-shelf navigation unit had settings for car, motorcycle, bus or truck. Although the unit was set for a bus, it chose a route through the Washington Park Arboretum that did not provide enough clearance for the nearly 12-foot-high vehicle, Abegg said. The driver told police he did not see the flashing lights or yellow sign posting the bridge height.

“We haven’t really had serious problems with anything, but here it’s presented a problem that we didn’t consider,” Abegg said of the GPS unit. “We just thought it would be a safe route because, why else would they have a selection for a bus?””

Link to original story (with pictures of sheared bus and bridge)

Indeed, why WOULD “they” have a selection for a bus? Here is an excerpt from the manual (disclosure: I am assuming it’s the same model), with a sketch of the implied routing logic after it:

Calculate Routes for – Lets you take full advantage of the routing information built in the City Navigator maps. Some roads have vehicle-based restrictions. For example, a street or gate may be accessible by emergency vehicles only, or a residential street may not allow commercial trucking traffic. By specifying which vehicle type you are driving, you can avoid being routed through an area that is prohibited for your type of vehicle. Likewise, the ******** III may give you access to roads or turns that wouldn’t be available to normal traffic. The following options are available:

  • Car/Motorcycle
  • Truck (large semi-tractor/trailer)
  • Bus
  • Emergency (ambulance, fire department, police, etc.)
  • Taxi
  • Delivery (delivery vehicles)
  • Bicycle (avoids routing through interstates and major highways)
  • Pedestrian”
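Judging from the excerpt, the vehicle selection filters routes on legal access restrictions stored in the map data. A minimal sketch of that logic (the data structures and field names are my assumption, not the vendor’s) shows why a “Bus” setting can still route a tall bus under a low bridge: the restriction data and the clearance data are different things, and the latter may simply not be there.

```python
# A sketch of vehicle-profile routing as the manual excerpt describes it:
# map edges carry legal access restrictions, so a "Bus" profile avoids roads
# where buses are prohibited. Note what is missing: physical clearance.
# These data structures are my own invention, for illustration only.

from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class RoadEdge:
    name: str
    prohibited_for: Set[str] = field(default_factory=set)  # e.g. {"Bus"}
    clearance_m: Optional[float] = None  # often simply absent from map data

def edge_allowed(edge: RoadEdge, vehicle_type: str,
                 vehicle_height_m: float) -> bool:
    if vehicle_type in edge.prohibited_for:
        return False
    # The check the Seattle route apparently lacked: a 12-ft (~3.7 m) bus
    # under a 9-ft (~2.7 m) bridge fails it, but only if clearance_m is
    # actually present in the database.
    if edge.clearance_m is not None and vehicle_height_m >= edge.clearance_m:
        return False
    return True
```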


If we can assume no automation is 100% reliable, at what point do people put too much trust in the system? At what point do they ignore the system in favor of more difficult methods, such as a paper map? At what point is a system so misleading that it should not be offered at all? Sanchez (2006) addressed these questions and related the type and timing of errors to the amount of trust placed in the automation. Trust declined sharply (for a time) after an error, so we may assume the Seattle driver might have re-checked the route manually had other (less catastrophic) errors occurred in the past.*

The spokesman for the GPS company is quoted in the above article as stating:

“Stoplights aren’t in our databases, either, but you’re still expected to stop for stoplights.”

I didn’t read the whole manual, but I’m pretty sure it never claims the GPS will warn you about stoplights; it does, however, claim to route appropriately for your vehicle, which is the feature that actually contributed to the accident. This is a case where an apology and a promise of redesign might serve the company better than blaming their users.

*Not a good strategy for preventing accidents!

Other sources for information on trust and reliability of automated systems:

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46, 50-80.

Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39, 230-253.

Wiegmann, D. A., Rich, A., & Zhang, H. (2001). Automated diagnostic aids: The effects of aid reliability on users’ trust and reliance. Theoretical Issues in Ergonomics Science, 2(4), 352-367.

The Cognitive Engineering Laboratory

Unusually quiet morning radio show

What if a radio DJ hosted a morning show and no one heard?

Lesson learned! I will try to make certain to hit ‘publish’ at the end of this post.

From the article:

“I’ve been doing the show three days a week for 10 months and always pressed the button at the right moment. Goodness knows why I forgot this time.”

“Mr Dixon, the station’s only employee, will not fire his “excellent” breakfast show DJ, who is one of 35 volunteers who have learnt their radio skills from scratch.”