I enjoy the mix of economics and psychology, which is why I am a faithful reader of the Freakonomics blog. Their recent podcast on “pain” started off with a good human-factors tale about the problematic design of a subway alarm system. I have included a link to the podcast below, but the quick overview is that an ear-piercing alarm is triggered by using the “emergency” exit, which is invariably used every day by someone wanting to get out faster than the turnstiles permit.
The person breaking the rules hears the alarm for the shortest time and faces no repercussions; the law-abiding citizens waiting in line to exit listen to it the longest.
“This is an emergency announcement. We may shortly need to make an emergency landing on water.”
Passenger Michelle Lord said: “People were terrified, we all thought we were going to die. They said the pilot hit the wrong button because they were so close together.”
I certainly see the point of an automated message, since in the event of an upcoming crash the crew is almost certainly busy. But the heart attack I might have upon hearing the message in error would render a crash moot.
A train trestle in Durham, NC has a clearance of 11’8″.
The typical height of a large rental truck ranges from 11’6″ (don’t bounce!) to 13’6″.
How often do you think about clearance when driving? Do you think you could adjust to thinking about it 100% of the time in your rental truck?
I’ve seen parking garages that hang a bar on chains well before the low ceiling to notify drivers that they are not going to make it. The bar bangs the front of the truck but does not peel the top off as the bridge does. The trucks in this video are going too quickly, so this warning would have to come well before they crossed the intersection. This solution probably has problems too: some drivers planning to turn before the bridge would be angry that a bar hit their truck, and getting someone to pay for and maintain the bar might be difficult, since the trestle owners want to blame the drivers (and so do other drivers, if you read the comments on the video).
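How far back would such a bar need to hang? A back-of-the-envelope stopping-distance calculation gives a sense of it. The reaction time and braking deceleration below are assumed typical values, not data from the video:

```python
def stopping_distance_m(speed_kmh: float,
                        reaction_s: float = 1.5,
                        decel_ms2: float = 4.0) -> float:
    """Distance covered during driver reaction plus braking.

    reaction_s and decel_ms2 are assumed values for illustration,
    not measurements from the video.
    """
    v = speed_kmh / 3.6                    # convert km/h to m/s
    return v * reaction_s + v * v / (2 * decel_ms2)

# A truck at 50 km/h needs roughly 45 m to stop, so the warning bar
# would have to hang well upstream of the trestle -- likely before
# the intersection the trucks cross in the video.
d = stopping_distance_m(50)
```

Even with generous assumptions, the bar ends up far enough from the bridge that drivers may no longer connect the bang on the roof with the trestle ahead.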
More video and information is available at 11foot8.com. Videos copyright Jürgen Henn – 11foot8.com.
How about just announcing the issue rather than matching it first with a color? For example: “Attention, tornado!” seems like it would be effective.
Elopement, by the way, means a patient with Alzheimer’s has wandered off and needs to be located. That makes “purple” a code within a code (and makes me want to watch Inception again). This is also one of the few I could understand wanting to disguise with a color.
“Shooter” is another candidate for obfuscation, although I imagine the shooter would quickly figure out that any announcements were about them, while hospital denizens look around and say “Huh, we’ve never heard code silver before. Sounds like something to do with Alzheimer’s.”
There is new information on the West Virginia coal mine tragedy where the methane detectors were disabled to prevent automatic shut down of the machinery. This comes from NPR:
Methane monitors are mounted on the massive, 30-foot-long continuous miners because explosive gas can collect in pockets near the roofs of mines. Methane can be released as the machine cuts into rock and coal. The spinning carbide teeth that do the cutting send sparks flying when they cut into rock. The sparks and the gas are an explosive mix, so the methane monitor is designed to signal a warning and automatically shut down the machine when gas approaches dangerous concentrations.
Because the monitor continually shut down the machine:
On Feb. 13, an electrician deliberately disabled a methane gas monitor on a continuous mining machine because the monitor repeatedly shut down the machine.
Three witnesses say the electrician was ordered by a mine supervisor to “bridge” the automatic shutoff mechanism in the monitor.
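The warn-then-shut-down behavior NPR describes can be sketched in a few lines. The thresholds and function names below are illustrative only (a warning level set below the shutdown level), not figures from the article:

```python
# Sketch of a methane monitor's warn-then-shutdown interlock.
# Threshold values are assumed for illustration, not taken from the article.

WARN_PCT = 1.0      # sound a warning above this methane concentration
SHUTDOWN_PCT = 2.0  # automatically de-energize the machine above this

def monitor_step(methane_pct: float) -> str:
    """Return the interlock action for one sensor reading."""
    if methane_pct >= SHUTDOWN_PCT:
        return "shutdown"   # the automatic cutoff that was "bridged"
    if methane_pct >= WARN_PCT:
        return "warn"
    return "run"

readings = [0.4, 0.9, 1.3, 2.2]
actions = [monitor_step(r) for r in readings]
```

Bridging the shutoff collapses the bottom branch into the top one: the machine keeps running no matter what the sensor reads, which is exactly the failure mode at issue.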
There is some discussion as to whether the monitor was malfunctioning and shutting the machine down when it should not have, or whether it was shutting down due to actual methane in the air. People in many industries willfully disable aids meant to keep them safe, and malfunction is only one of the variables that affect the behavior (granted, it’s likely a big one). Here are a couple of examples collected for NIOSH through the FACE database program*:
A 26-year-old Hispanic male knitting machine operator died when he was crushed by moving parts within the knitting machine as he tried to make an adjustment. The victim opened a safety gate and jammed a needle in the “on” button that allowed the machine to operate with the safety gates open.
On June 4, 2004, a 47-year-old co-owner of a recycling business was run over and killed by a Gradall telescopic boom lift (rough-terrain forklift) while he was working underneath it. He had been operating the Gradall, and had shut it down when he momentarily exited the vehicle. When he returned to the machine, he found it would not restart. The Gradall had a safety interlock that prevented starting from the ignition switch while in gear. The contractor was apparently unaware of this safety feature. He checked the batteries, and then crawled underneath the cab area and reached up into the engine compartment with a screwdriver. The screwdriver made contact between the two terminals on the starter, effectively jump-starting the engine and bypassing the safety mechanism that prevented ignition while in gear. The Gradall started and moved forward. The parking brake was not set. The back left tire rolled over the contractor.
In short, I admire but do not envy the designers who have to create these dangerous systems. Their users are inventive, under pressure, and different from each other in countless ways. Designing for safety sounds easy (one can imagine “just make it shut off when they aren’t using it”), but the answers are far from simple. Many of the examples I have seen from other industries show quick and easy ways to bypass a safety system.
Machinery automatically cuts off after 8 seconds when there is no weight in the driver’s seat. Worker keeps a heavy tool bag nearby to put on the seat when the worker wants to check on things outside the cab.
Same system as above – worker tries to jump out of cab and complete task in less than 8 seconds.
Worker cannot reach objective with lap safety bar in place, a bar that must be down for machinery to operate. Worker lifts bar then puts it back down across empty seat and reaches for objective with machinery running.
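The 8-second seat-switch cutoff in the first two examples is easy to model, and the model makes the bypass obvious: the interlock senses weight, not a person. A sketch (the weight threshold and class names are hypothetical):

```python
CUTOFF_S = 8.0  # machine stops this long after the seat goes empty


class SeatInterlock:
    """Cuts machine power 8 seconds after weight leaves the seat.

    Any sufficient weight satisfies the check -- which is exactly why
    a heavy tool bag left on the seat defeats it.
    """

    def __init__(self, min_weight_kg: float = 20.0):
        self.min_weight_kg = min_weight_kg
        self.empty_since = None  # timestamp when the seat became empty

    def update(self, seat_weight_kg: float, now: float) -> bool:
        """Return True if the machine may keep running at time `now`."""
        if seat_weight_kg >= self.min_weight_kg:
            self.empty_since = None   # a person -- or a tool bag
            return True
        if self.empty_since is None:
            self.empty_since = now    # seat just went empty; start the clock
        return (now - self.empty_since) < CUTOFF_S
```

The second bypass (jumping out and finishing the task within 8 seconds) needs no trick at all; it simply races the `CUTOFF_S` timer.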
There does seem to be a difference in premeditation in the examples I’ve come across and the idea of hiring an electrician to specifically and more permanently remove a guard from a safety system.
*I have posted on the FACE program before. It is a valuable repository.
Darin Ellis sends along this radio story about a woman’s robotic heart that has a malfunction warning system that literally breaks the textbook HF rules of alarm design. I’ll let Darin explain the unfortunate issue:
This woman, who is living thanks to a robotic heart, related a story of the “heart” malfunctioning. Apparently, although not prone to malfunction, there is a very particular way to recover from the malfunctioning state [it warns you via an alarm]
She was (luckily) at home. The alarms went off blaring like crazy. Her young kids react to the alarm and start screaming and crying… Then she had to figure out what was wrong and try to remember how to fix it in the right order. With the kids AND the alarm still blaring. Anyone see what is wrong here, or is it just me?
I am sure she is very grateful for this “heart,” but the story made me cringe. I am sure that when your heart literally stops, you don’t need alarms blaring to tell you something is wrong.
I went on a trip to Tucson over the holidays and toured the last Titan II missile silo. A brief history: from 1963 to 1982 these missiles were part of the Cold War strategy of “peace through deterrence” and “mutual assured destruction.” In essence, they provided one reason not to attack the US: even were we destroyed, these missiles would still launch to destroy the Soviet Union.
Politics aside, the control room and interfaces for these missiles were fascinating from a human factors perspective. Gauges, buttons, and rotary inputs reside where we now would expect screens and keyboards. I reflected on this while there: though you need a button for each function, at least the interface never changes.
I snapped the picture below as an example of users improving a system. It appears they are trying to reduce their memory demands by listing on labels the upper and lower boundaries of these controls. It reminded me of the draft beer handles added to the levers in a nuclear power plant (as discussed by Don Norman in “The Design of Everyday Things”).
A little more history: the Titan sites do not have a perfect safety record. With 54 sites operating for almost 20 years, there were four recorded accidents, all involving loss of life, including one early fire in which 53 people died. Fortunately, none of these accidents resulted in a nuclear explosion, not even the one where the nuclear piece of the missile was blown out of the silo. This site provides a list and engineering analysis of the accidents, and I would be interested in a human factors analysis.
In the accident that ejected the nuclear warhead, the commonly reported story says the explosion occurred when the missile was being serviced and a repairman dropped a heavy tool on the fuel tank. This implies the explosion was instant; however, it actually occurred over eight hours later, as the fuel escaped the breach. The best description I could find comes from a newspaper, the Arkansas Leader:
The missile was housed in a silo within a silo that consisted of eight levels. Maintenance crews were working on level two when the accident happened. Attached to the hydraulic standing platforms was a rubberized boot that flipped over between the missile and the platform to prevent anything from falling through if dropped.
The day missile 374-7 exploded, the boot didn’t keep the socket from falling. At 6:30 p.m., maintenance crews entered the silo to begin work after being delayed by various unrelated equipment malfunctions. The eight-and-three-quarter-pound socket fell, hit the standing platform and bounced toward the missile.
The boot had become too pliable through the years, and the socket fell 70 feet down the silo, hit the thrust mount and bounced into the side of the stage one fuel tank. The 100,000-gallon fuel tank emptied into the bottom of the silo. The fuels interacted and generated heat, which in turn increased the pressure on the tanks. At 8 p.m., the wing made the decision to evacuate the control center.
“When we did that, we had no readings and no way of telling what was going on out there,” Gray said. “We lost all readings,” Gray added.
Many attempts were made to get into the control center to see the readings, according to Gray. At 3 a.m., two people, Livingston and Sgt. Jack Kennedy, made it into the complex. “When they made it in and had to back out because the fuel was so concentrated they couldn’t see, there was some controversy on who told them to turn on exhaust fan 105,” Gray said.
What that did, according to Gray, was pull the heavy concentration of fuel into the equipment area with all the electrical pumps.
“And automatically, boom!” Gray said. “The fire flashed back into the silo, which already had tremendous heat in there, and when the fire flashed back, the stage one oxidizer tank that was already very, very high in pressure, erupted.”
Within one hour of the accident, Gray found the nuclear warhead intact. “It was cracked, but it pegged out on the radioactivity scanner,” Gray said.
Lessons learned from this accident brought about safety improvements near nuclear weapons. Measures to prevent such accidents now include a belt with lanyards so workers can tether their tools, a cloth on the platform to reduce the chance of a dropped tool bouncing off, and a renovation of the platforms themselves.
One of our tour guides had actually been stationed at the silo. He was a great guide and a living piece of history. Consistent with what you might expect, he said the hardest times to keep the missile running and protected were the down times, hours of vigilance and inactivity.
Last, I also photographed some of the operation manuals at the museum. Apologies for the fuzziness of these pictures, and I’ll re-type the best bits:
7. Key Run Up procedure, if required. (figure 3-26C)….Performed
Reference SACR 100-24, Volume VI to determine if key run up is required.
Step 8 can only be performed when SYNC indicator is lighted in NORM modes or TRACK/TRSHD indicators are lighted in SPCL modes.
8. DEMODULATOR CONTROL PRINT MODE thumbwheel switch ….. TEST
Observe printout on teleprinter. Printout is continuous series of characters RY’s or 64’s if transmitter site is transmitting idle message, or normal message traffic.
9. DEMODULATOR CONTROL PRINT MODE thumbwheel switch …Set as directed
*PVD must be continuously monitored visually or aurally.
*The PVD may be monitored by either a team in the silo or a crew member in the control center utilizing the wire type maintenance net.
For entry into launch duct level one, the PFC will be positioned outside of the opened level two launch duct access door, with sufficient probes to reach in the launch duct unless the PVD is required on level one of the launch duct for a sniff check.
Generally, I notice a large number of if/then/or/only types of commands.
I have only one last thing to say: the fact that Tucson, AZ, Damascus, AR and Wichita, KS are still around is a testament to the power of training and practice over our human frailties.
I’ve heard a great deal about trust and automation over the years, but this has to be my favorite new example of over-reliance on a system.
GPS routed bus under bridge, company says
“The driver of the bus carrying the Garfield High School girls softball team that hit a brick and concrete footbridge was using a GPS navigation system that routed the tall bus under the 9-foot bridge, the charter company’s president said Thursday. Steve Abegg, president of Journey Lines in Lynnwood, said the off-the-shelf navigation unit had settings for car, motorcycle, bus or truck. Although the unit was set for a bus, it chose a route through the Washington Park Arboretum that did not provide enough clearance for the nearly 12-foot-high vehicle, Abegg said. The driver told police he did not see the flashing lights or yellow sign posting the bridge height.
“We haven’t really had serious problems with anything, but here it’s presented a problem that we didn’t consider,” Abegg said of the GPS unit. “We just thought it would be a safe route because, why else would they have a selection for a bus?””
Indeed, why WOULD “they” have a selection for a bus? Here is an excerpt from the manual (Disclosure: I am assuming it’s the same model):
“Calculate Routes for – Lets you take full advantage of the routing information built in the City Navigator maps. Some roads have vehicle-based restrictions. For example, a street or gate may be accessible by emergency vehicles only, or a residential street may not allow commercial trucking traffic. By specifying which vehicle type you are driving, you can avoid being routed through an area that is prohibited for your type of vehicle. Likewise, the ******** III may give you access to roads or turns that wouldn’t be available to normal traffic. The following options are available:
Truck (large semi-tractor/trailer)
Emergency (ambulance, fire department, police, etc.)
Delivery (delivery vehicles)
Bicycle (avoids routing through interstates and major highways)
If we can assume no automation is 100% reliable, at what point do people put too much trust in the system? At what point do they ignore the system in favor of more difficult methods, such as a paper map? At what point is a system so misleading that it should not be offered at all? Sanchez (2006) addressed this question, relating the type and timing of errors to the amount of trust placed in the automation. Trust declined sharply (for a time) after an error, so we may assume the Seattle driver might have re-checked the route manually had other (less catastrophic) errors occurred in the past.*
The spokesman for the GPS company is quoted in the above article as stating:
“Stoplights aren’t in our databases, either, but you’re still expected to stop for stoplights.”
I didn’t read the whole manual, but I’m pretty sure it doesn’t say the GPS will warn you of stoplights, which would be a closer analogy to the actual feature that contributed to the accident. This is a time when an apology and a promise of redesign might serve the company better than blaming its users.
*Not a good strategy for preventing accidents!
Other sources for information on trust and reliability of automated systems: