The BBC has reported that the incident analysis of the Air France crash that killed 228 people attributed the accident to a lack of pilot skill in dealing with a high-altitude stall.
Here is a link to the BEA Report from the Bureau d’Enquetes et d’Analyses. It’s a frightening read, as it gives a moment-by-moment analysis of the last minutes in the cockpit. No emergency was ever declared, and there did not appear to be any mechanical failures. The flight crew seemed to believe events were under control the entire time (despite the alarms).
A GPS certainly makes life easier. Many of us have considered what would happen if we were without one, or if it could not identify where we were, but we less often consider how it might lead us astray.
One of our early postings on the Human Factors Blog was about a bus driver following GPS directions that led under a too-short bridge. His case was complicated by the fact that he had chosen the “bus” setting on the GPS and assumed any route it produced was therefore safe for buses. In fact, the GPS’s bus mode only added routes that buses alone could take, such as HOV exits; it did not exclude routes that were unsafe for buses.
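The mismatch between the driver’s mental model and the device’s actual behavior can be made concrete with a small sketch. The data and function names below are hypothetical, purely illustrative of the add-routes-vs-restrict-routes distinction, not the actual GPS software:

```python
# Hypothetical road segments: (name, clearance in feet)
ROADS = [
    ("HOV exit",      16.0),
    ("Main St",       16.0),
    ("Low underpass", 10.5),  # too short for a 12 ft bus
]

BUS_HEIGHT_FT = 12.0

def bus_mode_as_implemented(roads):
    """What the device apparently did: keep every road and merely
    ADD bus-only segments (already present in this list). Nothing
    is ever excluded."""
    return list(roads)

def bus_mode_as_assumed(roads, bus_height=BUS_HEIGHT_FT):
    """What the driver assumed: exclude any segment a bus can't clear."""
    return [r for r in roads if r[1] >= bus_height]

# The dangerous underpass survives the routing as implemented...
print(any(name == "Low underpass" for name, _ in bus_mode_as_implemented(ROADS)))
# ...but would have been filtered out under the routing the driver assumed.
print(any(name == "Low underpass" for name, _ in bus_mode_as_assumed(ROADS)))
```

The two functions return the same roads for an ordinary car; the difference only surfaces when a hazardous segment is in the map, which is exactly when the user’s trust matters most.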
NPR just posted stories of people in Death Valley who got lost after following GPS directions down roads that no longer existed. In one case, a family’s car was stuck for 5 days, resulting in the death of a child. After hearing numerous stories about inaccurate GPS directions from lost drivers, a ranger investigated the maps used by the GPS systems and found roads in them that had been closed for years. How accurate and up to date do GPS systems need to be to be considered safe? How can they address over-trust in potentially dangerous situations (e.g., Death Valley)?
The story of a UK man who was pulled through a small opening meant for steel poles is in the news again, as the companies involved have pleaded guilty to not having safety measures in place. Read the article here (with a picture of the machine).
I’ve excerpted the reported accident factors below:
“His clothing snagged on the machine and he was forced through an opening just 125mm wide on the machine head, suffering injuries that have caused lasting physical and psychological damage…. The HSE investigation into the incident on 19 December 2008 found there was no guarding in place to protect the worker from dangerous moving parts … HSE investigators also established that Matthew, then aged 23, was inexperienced in operating the machinery after being moved from a different line at the factory because of a lull in his regular workload. However, it was the lack of guarding that was deemed the decisive factor.”
Prosecuting, Chris Chambers said: “The machine could start, stop and restart without warning to the operator. As Matthew leaned through the hatch he was struck on the back of the shoulder and pulled through. Shoulder to feet he was pulled through the opening…the width of a CD case.”
Summary: Clothing entanglement, lack of guards, lack of warning, and inexperience. I find it interesting that the tone of the article seems to downplay the potential role of inexperience, perhaps because citing it sounds like “blaming the victim.” I don’t think inexperience should carry that overtone: re-assigning a worker to a dangerous task without proper training is an organizational failure on the company’s part, not a fault of Matthew’s for performing the job he was assigned.
A train trestle in Durham, NC has a clearance of 11’8″.
The typical height of a large rental truck ranges from 11’6″ (don’t bounce!) to 13’6″.
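The margin, or lack of one, is easy to work out from those two figures. A quick sketch of the arithmetic (the heights are the ones quoted above; the helper function is mine):

```python
def to_inches(feet, inches):
    """Convert a feet-and-inches height to total inches."""
    return feet * 12 + inches

trestle = to_inches(11, 8)      # 140 in of clearance

# Rental trucks bracket the trestle's clearance:
low_truck  = to_inches(11, 6)   # 138 in: clears by only 2 in
high_truck = to_inches(13, 6)   # 162 in: 22 in too tall

print(trestle - low_truck)      # margin for the shortest truck
print(high_truck - trestle)     # overshoot for the tallest truck
```

Even the shortest common rental truck has only a two-inch margin, which is why the “(don’t bounce!)” aside above is only half a joke.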
How often do you think about clearance when driving? Do you think you could adjust to thinking about it 100% of the time in your rental truck?
I’ve seen parking garages that hang a bar well before the low ceiling to notify drivers that they are not going to make it. The bar, on chains, will bang the front of the truck but not peel the top off as the bridge does. The trucks in this video are going too quickly, so this warning would have to come well before they crossed the intersection. This solution probably has problems too. I’m sure some drivers who were planning to turn before the bridge would be angry that a bar hit their truck. Also, getting someone to pay for and maintain the bar might be difficult, as the trestle owners want to blame the drivers (and so do other drivers, if you read the comments on the video).
More video and information is available at 11foot8.com. Videos copyright Jürgen Henn – 11foot8.com.
We’ve posted before on confusing bottles, even those with labels. This latest problem comes from a type of contact lens solution that burns your eyes if used immediately, but not after your contacts have soaked in it for a long period of time.
It is a hydrogen peroxide solution that you use overnight in a special lens case that causes the peroxide to fizz and clean the lenses. By morning, the caustic peroxide is neutralized, at which point you douse the lenses in the rinsing solution of your choice and put them in. My dispenser had issued a stern warning that I was never, ever to use the Clear Care as a rinse, or reinsert the lenses after any less than six hours of disinfection, or I would risk a corneal burn.
…The Institute for Safe Medication Practices (ISMP) reports the FDA has received hundreds of complaints of similar mishaps, and you can easily find many more such accounts online. In our little corridor of offices here at Consumers Union alone we discovered two cases – my daughter’s and that of a colleague who made the same mistake when she asked to borrow rinsing solution at a relative’s home.
…there is a narrow red warning strip at the top, but all you can see from the front is “use only lens case provided,” which isn’t terribly informative but critically important, because if you use the product in a regular lens case, it won’t neutralize the peroxide. The bottle comes with a little cardboard collar that says “do not put Clear Care directly in eye,” but it can easily be removed (or fall off). And the dispenser has a red tip, which supposedly signals that you shouldn’t put the solution directly in your eye. Have you ever heard of this? My daughter sure hadn’t.
And nowhere on the bottle is there an explicit warning that putting the stuff directly in your eye can cause a chemical burn. Or, for that matter, instructions on what to do if you make that mistake (rinse your eye with copious water or saline, it turns out)…
I made this mistake myself back in 1996 when visiting a friend and borrowing their contact lens solution. I even noticed the red tip and thought it was an attractive branding idea — moments before thinking I was going to go blind from the pain.
How would you address this warning issue? Any creative ideas? It is complicated by the solution changing hazard status after being neutralized.
Slate.com has a nice article on the difference between U.S. exit signs and those used in the rest of the world, as well as a nice history of the evolution of the symbols. Here is an excerpt to get you interested:
The text-based American exit sign has its origins in the 1911 Triangle Shirtwaist Fire, a blaze in a downtown Manhattan garment factory that killed 146 workers. Although signage was not primarily to blame for those fatalities—many factory doors were bolted shut in an effort to keep employees from slipping out—the exits were not clearly marked. That massive loss of life spurred the National Fire Protection Association, which had been founded in 1896 by insurance companies to develop protocols for property preservation, to take up what it called “life safety”: the business of getting people out of burning buildings intact. In the 1930s and ’40s, the NFPA developed criteria for emergency-exit signage, evaluating contrast levels and testing different sizes and stroke widths for lettering, eventually publishing standards that were adopted by state and local governments across the land.
Darin Ellis sends along this radio story about a woman’s robotic heart that has a malfunction warning system that literally breaks the textbook HF rules of alarm design. I’ll let Darin explain the unfortunate issue:
This woman, who is living thanks to a robotic heart, related a story of the “heart” malfunctioning. Apparently, although the device is not prone to malfunction, there is a very particular procedure for recovering from the malfunctioning state [it warns you via an alarm].
She was (luckily) at home. The alarms went off blaring like crazy. Her young kids react to the alarm and start screaming and crying… Then she had to figure out what was wrong and try to remember how to fix it in the right order. With the kids AND the alarm still blaring. Anyone see what is wrong here, or is it just me?
I am sure she is very grateful for this “heart,” but the story made me cringe. I am sure that when your heart literally stops, you don’t need alarms blaring to tell you something is wrong.
The NYTimes has an interesting OpEd for which they asked various designers to re-imagine the Homeland Security Advisory System. It’s a multimedia presentation with narration from the graphic designers. Not much warnings research, but interesting. Here is what it looks like now:
and here is one proposed redesign that, according to the designer, takes advantage of our ability to read emotions from eyes: