Category Archives: safety

Hawaii False Alarm: The story that keeps on giving

Right after the Hawaii false nuclear alarm, I posted about how the user interface seemed to contribute to the error. At the time, sources were reporting it as a “dropdown” menu. Well, that wasn’t exactly true, but in the last few weeks it’s become clear that truth is stranger than fiction. Here is a run-down of the news on the story (spoiler: every step is a human factors-related issue):

  • Hawaii nuclear attack alarms are sounded, also sending alerts to cell phones across the state
  • Alarm is noted as false and the state struggles to get that message out to the panicked public
  • Error is blamed on a confusing drop-down interface: “From a drop-down menu on a computer program, he saw two options: ‘Test missile alert’ and ‘Missile alert.’”
  • The actual interface is found and shown – rather than a drop-down menu, it’s just closely clustered links on an interface that looks like a 1990s-era website, reading “DRILL-PACOM(CDW)-STATE ONLY” and “PACOM(CDW)-STATE ONLY” (see the sketch after this list)
  • It comes to light that part of the reason the wrong alert stood for 38 minutes was that the Governor didn’t remember his Twitter login and password
  • Latest news: the employee who sounded the alarm says it wasn’t an error, he heard this was “not a drill” and acted accordingly to trigger the real alarm
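
Whether the culprit was a drop-down or clustered links, the human factors failure is the same: a one-click, irreversible, high-consequence action that looks almost identical to a routine drill. Here is a minimal sketch of the kind of forcing function that guards against this. The code is hypothetical, illustrating the design principle rather than the actual Hawaii EMA software:

```python
# Hypothetical sketch of a forcing function for a high-consequence action.
# Illustrative only; this is not the actual Hawaii EMA interface.

def send_alert(alert_type: str) -> bool:
    """Dispatch an alert, requiring typed confirmation for live alerts."""
    if alert_type == "DRILL":
        print("Drill alert sent (internal test channels only).")
        return True

    # A live alert is irreversible, so it should never share a click
    # target with the drill. Require a distinct, typed confirmation.
    print("WARNING: This will send a LIVE missile alert to the public.")
    confirmation = input('Type "SEND LIVE ALERT" to confirm: ')
    if confirmation != "SEND LIVE ALERT":
        print("Confirmation did not match; no alert sent.")
        return False

    print("Live alert sent.")
    return True

if __name__ == "__main__":
    send_alert("DRILL")
```

Just as important is the path back: the same interface should make retracting a false alert a one-step action, rather than the 38-minute scramble Hawaii experienced.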

The now-fired employee has spoken up, saying he was sure of his actions and “did what I was trained to do.” When asked what he’d do differently, he said “nothing,” because everything he saw and heard at the time made him think this was not a drill. His firing is clearly an attempt by Hawaii to get rid of a ‘bad apple.’ Problem solved?

It seems like a good time for my favorite reminder from Sidney Dekker’s book, “The Field Guide to Human Error Investigations” (abridged):

To protect safe systems from the vagaries of human behavior, recommendations typically propose to:

    • Tighten procedures and close regulatory gaps. This reduces the bandwidth in which people operate. It leaves less room for error.
    • Introduce more technology to monitor or replace human work. If machines do the work, then humans can no longer make errors doing it. And if machines monitor human work, they can snuff out any erratic human behavior.
    • Make sure that defective practitioners (the bad apples) do not contribute to system breakdown again. Put them on “administrative leave”; demote them to a lower status; educate or pressure them to behave better next time; instill some fear in them and their peers by taking them to court or reprimanding them.

In this view of human error, investigations can safely conclude with the label “human error”—by whatever name (for example: ignoring a warning light, violating a procedure). Such a conclusion and its implications supposedly get to the causes of system failure.

AN ILLUSION OF PROGRESS ON SAFETY
The shortcomings of the bad apple theory are severe and deep. Progress on safety based on this view is often a short-lived illusion. For example, focusing on individual failures does not take away the underlying problem. Removing “defective” practitioners (throwing out the bad apples) fails to remove the potential for the errors they made.

…[T]rying to change your people by setting examples, or changing the make-up of your operational workforce by removing bad apples, has little long-term effect if the basic conditions that people work under are left unamended.

A ‘bad apple’ is often just a scapegoat that makes people feel better by giving them a focus for blame. Real improvements in safety come from improving the system, not from getting rid of employees who were forced to work within a problematic system.

Outside Magazine profiles Anne’s rock climbing & human factors research

Anne’s research on attention and rock climbing was recently featured in an article in Outside Magazine:

To trad climb is to be faced with hundreds of such split-second micro decisions, the consequences of which can be fatal. That emphasis on human judgment and its fallibility intrigued Anne McLaughlin, a psychology professor at North Carolina State University. An attention and behavior researcher, she set out to model how and why rock climbers make decisions, and she’d recruited Weil and 31 other trad climbers to contribute data to the project.

The idea for the study first came about at the crag. In 2011, McLaughlin, Chris Wickens, a psychology professor at Colorado State University, and John Keller, an engineer at Alion Science and Technology, converged in Las Vegas for the Human Factors and Ergonomics Society conference, an annual event that brings together various professionals practicing user-focused product design. With Red Rocks just a few minutes away, the three avid climbers were eager to get some time on the rock before the day’s sessions, says Keller, even if it meant starting at 3 a.m.

Tesla counterpoint: “40% reduction in crashes” with introduction of Autosteer

I posted yesterday about the challenges of fully autonomous cars and cars that approach autonomy. Today I bring you a story about the successes of semi-autonomous features in automobiles.

Tesla has a feature called Autopilot that assists the driver without being completely autonomous. Autopilot includes car-controlled actions such as collision warnings, automatic emergency braking, and automatic lane keeping. Tesla classifies the Autopilot features as Level 2 automation (Level 5 is considered fully autonomous). Rich has already shared our thoughts about the name “Autopilot” in a previous post. One particular feature is called Autosteer, described in the NHTSA report as:

The Tesla Autosteer system uses information from the forward-looking camera, the radar sensor, and the ultrasonic sensors, to detect lane markings and the presence of vehicles and objects to provide automated lane-centering steering control based on the lane markings and the vehicle directly in front of the Tesla, if present. The Tesla owner’s manual contains the following warnings: 1) “Autosteer is intended for use only on highways and limited-access roads with a fully attentive driver. When using Autosteer, hold the steering wheel and be mindful of road conditions and surrounding traffic. Do not use Autosteer on city streets, in construction zones, or in areas where bicyclists or pedestrians may be present. Never depend on Autosteer to determine an appropriate driving path. Always be prepared to take immediate action. Failure to follow these instructions could cause serious property damage, injury or death;” and 2) “Many unforeseen circumstances can impair the operation of Autosteer. Always keep this in mind and remember that as a result, Autosteer may not steer Model S appropriately. Always drive attentively and be prepared to take immediate action.” The system does not prevent operation on any road types.

An NHTSA report looking into a fatal Tesla crash also noted that the introduction of Autosteer corresponded to an almost 40% reduction in crash rates. That’s a lot, considering Dr. Gill Pratt of Toyota said he might be happy with a 1% change.
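
Where does that figure come from? The report compared airbag-deployment crash rates in Tesla’s fleet before and after Autosteer was installed; the widely quoted numbers are roughly 1.3 crashes per million miles before versus 0.8 after. A quick check of the arithmetic, taking those two reported rates as given:

```python
# Crash rates quoted from the NHTSA ODI report: airbag-deployment
# crashes per million miles, before and after Autosteer installation.
before = 1.3
after = 0.8

reduction = (before - after) / before
print(f"Relative reduction: {reduction:.0%}")  # 38%, i.e., "almost 40%"
```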

Autopilot was enabled in October 2015, so there has been a good period of time for post-Autopilot crash data to accumulate.

Toyota Gets It: Self-driving cars depend more on people than on engineering

I recommend reading this interview with Toyota’s Dr. Gill Pratt in its entirety. He discusses point-by-point the challenges of a self-driving car that we consider in human factors, but don’t hear much about in the media. For example:

  • Definitions of autonomy vary. True autonomy is far away. He gives the example of a car performing well on an interstate or in light traffic compared to driving through the center of Rome during rush hour.
  • Automation will fail. And the less it fails, the less prepared the driver is to assume control.
  • Emotionally, we cannot accept autonomous cars that kill people, even if they reduce overall crash rates and save lives in the long run.
  • It is difficult to run simulations with the autonomous cars that capture the extreme variability of the human drivers in other cars.

I’ll leave you with the last paragraph in the interview as a summary:

So to sum this thing up, I think there’s a general desire from the technical people in this field to have both the press and particularly the public better educated about what’s really going on. It’s very easy to get misunderstandings based on words like or phrases like “full autonomy.” What does full actually mean? This actually matters a lot: The idea that only the chauffeur mode of autonomy, where the car drives for you, that that’s the only way to make the car safer and to save lives, that’s just false. And it’s important to not say, “We want to save lives therefore we have to have driverless cars.” In particular, there are tremendous numbers of ways to support a human driver and to give them a kind of blunder prevention device which sits there, inactive most of the time, and every once in a while, will first warn and then, if necessary, intervene and take control. The system doesn’t need to be competent at everything all of the time. It needs to only handle the worst cases.
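
Pratt’s closing idea (a support system that sits inactive, warns first, and intervenes only when necessary) can be made concrete with a tiny sketch. This is my illustration of the escalation logic, not Toyota’s implementation; the thresholds, names, and risk estimate are all assumed:

```python
# Illustrative sketch of a guardian-style driver-support loop.
# Thresholds and the risk estimate are hypothetical, not Toyota's design.

WARN_RISK = 0.6       # estimated risk at which the system warns the driver
INTERVENE_RISK = 0.9  # estimated risk at which the system takes control

def guardian_step(estimated_risk: float) -> str:
    """Return the support system's action for one control cycle."""
    if estimated_risk >= INTERVENE_RISK:
        return "intervene"  # e.g., brake or steer away from the hazard
    if estimated_risk >= WARN_RISK:
        return "warn"       # alert the driver, who retains control
    return "inactive"       # the human drives; the guardian just watches

# Example: risk rising through one near-miss, then receding.
for risk in (0.1, 0.3, 0.65, 0.95, 0.2):
    print(f"risk={risk:.2f} -> {guardian_step(risk)}")
```

The point of the sketch is Pratt’s: such a system “doesn’t need to be competent at everything all of the time,” only in the worst cases, which is a much easier engineering target than full autonomy.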

Institutional Memory, Culture, & Disaster

I admit a fascination for reading about disasters. I suppose I’m hoping for the antidote. The little detail that will somehow protect me next time I get into a plane, train, or automobile. A gris-gris for the next time I tie into a climbing rope. Treating my bike helmet as a talisman for my commute. So far, so good.

As human factors psychologists and engineers, we often analyze large-scale accidents and look for the reasons (pun intended) that run deeper than a single operator’s error. You can see some of my previous posts on Wiener’s Laws, Ground Proximity Warnings, and the Deepwater Horizon oil spill.

So, I invite you to read this wonderfully detailed blog post by Ron Rapp about how safety culture can slowly derail, “normalizing deviance.”

Bedford and the Normalization of Deviance

He tells the story of a chartered plane crash in Bedford, Massachusetts in 2014, a take-off with so many skipped safety steps and errors that it seemed destined for a crash. There was plenty of time for the pilot to stop before the crash, leading Rapp to say, “It’s the most inexplicable thing I’ve yet seen a professional pilot do, and I’ve seen a lot of crazy things. If locked flight controls don’t prompt a takeoff abort, nothing will.” He sums up the reasons for these pilots’ “deviant” performance via Diane Vaughan’s factors of normalization (some interpretation on my part here):

  • If rules and checklists and regulations are difficult, tedious, unusable, or interfere with the goal of the job at hand, they will be misused or ignored.
  • We can’t treat top-down training or continuing education as the only source of information. People pass on shortcuts, tricks, and attitudes to each other.
  • Reward the behaviors you want. But we tend to punish safety behaviors when they delay secondary (but important) goals, such as keeping passengers happy.
  • We can’t ignore the social world of the pilots and crew. Speaking out against “probably” unsafe behaviors is at least as hard as calling out a boss or coworker who makes “probably” racist or sexist comments. The higher the ambiguity, the less likely people are to take action (“I’m sure he didn’t mean it that way” or “Well, we skipped that list, but it’s been fine the last ten times”).
  • The cure? An interdisciplinary solution coming from human factors psychologists, designers, engineers, and policy makers. That last group might be the most important, in that they can recognize that a focus on safety is not necessarily more rules and harsher punishments. It means checking that each piece of the system is efficient, valued, and usable, and that those pieces work together in an integrated way.

    Thanks to Travis Bowles for the heads-up on this article.
    Feature photo from the NTSB report, photo credit to the Massachusetts Police.

    Human Factors Potpourri

    Some recent items in the news with a human factors angle:

    • What happened to Google Maps?  Interesting comparison of Google Maps from 2010/2016 by designer/cartographer Justin O’Beirne.
    • India will use 3D paintings to slow down drivers.  Excellent use of optical illusions for road safety.
    • Death by GPS.  GPS mis-routing is the easiest and most relatable example of human-automation interaction.  Unfortunately, to its detriment, this article does not discuss the automation literature, instead focusing on more basic processes that, I think, are less relevant.

    Wiener’s Laws

    The article “The Human Factor” in Vanity Fair is two years old, but since I can’t believe I missed posting it — here it is! It’s a riveting read with details of the Air France Flight 447 accident and intelligent discussion of the impact automation has on human performance. Dr. Nadine Sarter is interviewed and I learned of a list of flight-specific “laws” developed by Dr. Earl Wiener, a past-president of HFES.

    “Wiener’s Laws,” from the article and from Aviation Week:

    • Every device creates its own opportunity for human error.
    • Exotic devices create exotic problems.
    • Digital devices tune out small errors while creating opportunities for large errors.
    • Invention is the mother of necessity.
    • Some problems have no solution.
    • It takes an airplane to bring out the worst in a pilot.
    • Whenever you solve a problem, you usually create one. You can only hope that the one you created is less critical than the one you eliminated.
    • You can never be too rich or too thin (Duchess of Windsor) or too careful about what you put into a digital flight-guidance system (Wiener).
    • Complacency? Don’t worry about it.
    • In aviation, there is no problem so great or so complex that it cannot be blamed on the pilot.
    • There is no simple solution out there waiting to be discovered, so don’t waste your time searching for it.
    • If at first you don’t succeed… try a new system or a different approach.
    • In God we trust. Everything else must be brought into your scan.
    • Any pilot who can be replaced by a computer should be.
    • Today’s nifty, voluntary system is tomorrow’s F.A.R.

    Kudos to the author, William Langewiesche, for a well researched and well written piece.

    Helmet Design and Environment Interaction

    I wanted a new helmet that offered some side-impact protection to replace my trusty Petzl Ecrin Roc, especially after a helmet-less Slovenian climber mocked me in Italy for wearing “such a heavy helmet” at a sport climbing crag.

    I now own the Petzl Meteor, but after one trip discovered a strange design flaw.

    Most helmets clip together the way car seat or backpack buckles do.

    The Petzl Meteor helmet has a similar clip, but it also contains magnets that draw the two halves of the buckle together so it snaps shut on its own.

    I was climbing at Lover’s Leap in California, a granite cliff. Those of you who know your geology might guess what happens when you combine magnets and iron-rich granite. I put the helmet on the ground while sorting gear, put it back on and heard the buckle snap together. A few minutes later, I looked down (which put some strain on the helmet strap), the buckle popped open, and the helmet fell off my head.

    When I examined the buckle, there was grit stuck to the magnet.

    Iron grit on magnet

    Wiping it off seemed to work, except that it moved some of the grit to the sides rather than just off the top. My fingers weren’t small enough to wipe it from the sides. So, the next time I snapped the buckle shut and checked to make sure it was locked, I couldn’t get it open. The grit on the sides prevented the buckle from pinching enough to release. I was finally able to clear the sides by working part of a strap into the crevices.

    Iron grit on sides

    I made some videos of the phenomenon. It was pretty easy to reproduce: I just had to put my helmet on the ground for a moment and pick it up again. Attached grit was guaranteed – these are strong magnets!

    I am not the only person to notice this:

    In one review of another helmet with a similar closure:

    The only issue I had with the buckle came after wearing the Sirocco while bolting and cleaning a granite sport route. Some of the swirling granite dust adhered to the magnets, obstructing the clips. It was easy enough to fix: I just wiped the magnets clean, and it has worked perfectly since.

    and:

    Helmet review from Outdoor Gear Lab

    What we found in our tests of both the Meteor and the Sirocco was that the magnet did not always have enough oomph to click both small arms of the buckle completely closed. About one in four times, only one of the plastic arms would fasten and the buckle would need an extra squeeze to click the other arm in. Another thing our testers noticed was that the magnet would pick up tiny pebbles which would prevent the buckle from fully closing. The pebbles can be easily cleaned by brushing off the exposed part of the magnet, but it adds an extra step to applying the helmet. The bottom line is, we prefer the simplicity of the old plastic buckle. We think that the magnet is a gimmick which potentially makes a less safe helmet.

    Safety gear shouldn’t add steps to remember, such as double-checking that the buckle is locked even after getting auditory and tactile feedback that it has connected. Some people may never climb in an area with iron in the ground, but the use case of a granite environment should have been considered. You know, for little climbing areas such as the granite cliffs of Yosemite.

    Rock Climbing Human Factors – Harness attachment points

    A friend of mine was recently rappelling from a climb, meaning that she had the rope through a device that was connected to the belay loop on her harness. As she rappelled, she yelled that her harness had broken, and the waistband of the harness slid nearly to her armpits. Fortunately, she remained calm and collected, and was still able to rappel safely, if awkwardly, to the ground. On the ground, her partner saw that her waistband with belay loop had become disconnected from her leg loops. The leg loops were intact, though a keeper strap that helps the leg loops stay centered was no longer connected.

    So, what happened?

    First, for the non-climbers, a primer. A climbing harness is composed of three major parts, attached to each other in various ways depending on the manufacturer. The first part is the waistband, which is load-bearing, meaning that it is meant to take the weight of a climber.

    The second part of the harness is the belay loop, a load-bearing stitched circle that connects the waistband and leg loops and is also used to hold a belay device, to hold the climber’s weight when rappelling, and for anchoring to the ground or a wall when needed.

    The last part of the harness is the leg loops, which are also load-bearing in the parts that connect to the belay loop and around the legs themselves.

    Figure 1 shows the general composition of climbing harnesses, with these three parts diagrammed in the Base Concept.

    Figure 1. Simplified diagrams of climbing harnesses.

    On most harnesses, the leg loops are kept connected to the belay loop by a “keeper strap.” This is usually a weak connection not meant to bear weight, but only to keep the leg loops centered on the harness (shown in blue in Figure 1). In the case study that prompted this blog post, the keeper strap was connected through the belay loop, rather than through the full-strength leg loops (Figure 2). When loaded, it came apart, separating the leg loops from the waistbelt. My own tests found that the keeper strap can be very strong when it is loaded on the strap itself. But if the leg loops move so that the keeper buckle is loaded by the belay loop, it comes apart easily.

    Figure 2. Harness assembled with keeper strap bearing weight via the belay loop.

    There are two ways to mis-attach leg loops to the belay loop of a harness. The first way is by connecting the leg loops back to the harness, after they were removed, using the keeper strap. The video below demonstrates this possibility. Once connected, the harness fits well and gives little indication the leg loops are not actually connected to bear weight.

    The second (and I think more likely) way is by having the leg loops disconnected from the back of the harness, usually for a bathroom break or to get in and out of the harness. The leg loops are still connected in the front of the harness, but if a leg loop passes through the belay loop, suddenly the keeper strap is load bearing when the leg loops flip around. However, the harness does not fit differently nor does it look particularly different unless carefully inspected. Video below.

    The non-load-bearing parts of the harness are what determine the possibility for this error. In Figure 1, some harnesses either do not allow disconnection of the leg loops in back or only allow their disconnection in tandem. When the leg loops are connected in this way, the front of the leg loops cannot be passed through the belay loop. Video demonstration below.

    Back to Figure 1: some harnesses allow the disconnection of the leg loops for each leg. If these are disconnected, a loop may be passed through the front belay loop, resulting in the error shown in Figure 2.

    In sum, this error can be examined for likelihood and severity. It is not likely to occur, but if it does, it will probably go undiscovered until the keeper strap comes apart. As for severity, the error could be lethal, although that outcome is not likely: the waistbelt will still hold the climber’s weight, and having both leg loops and a waistbelt is a (comfortable) redundancy. However, the shock of suddenly losing support from the leg loops could cause a loss of control, either on an un-backed-up rappel or while belaying another climber.
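
This likelihood-by-severity framing is the standard risk-matrix analysis from safety engineering. A toy version in code (the categories, scoring, and the example rating are my own illustration, not a formal assessment):

```python
# Toy likelihood x severity risk matrix, in the spirit of the analysis
# above. Categories, scoring, and the example rating are illustrative.

LIKELIHOOD = ["rare", "unlikely", "possible", "likely"]
SEVERITY = ["negligible", "moderate", "severe", "catastrophic"]

def risk_level(likelihood: str, severity: str) -> str:
    """Combine ordinal likelihood and severity into a rough risk level."""
    score = LIKELIHOOD.index(likelihood) + SEVERITY.index(severity)
    if score >= 5:
        return "intolerable"
    if score >= 3:
        return "mitigate by design"  # e.g., make all attachment points load-bearing
    return "monitor"

# The harness error: rare, hard to detect, potentially lethal.
print(risk_level("rare", "catastrophic"))  # -> mitigate by design
```

Rare-but-catastrophic errors that hide until they matter are exactly the ones worth designing out, which leads to the alternatives below.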

    What are the alternatives?

  • Climbing is exploding in popularity, particularly climbing in gyms. “Gym” harnesses, with fewer components and gear loops (Figure 1), are a good option for most climbers now. However, there is little guidance about which harness to buy for the gym versus outdoor versatility, so few climbers probably know this harness is a good option.
  • Some harnesses are designed to be load-bearing at all points (e.g., the “SafeTech” design), making it impossible to make an error in leg loop attachment.

  • Harnesses with permanently attached leg loops or loops that attach in the back with a single point are unlikely to result in the error.
  • Many climbers reading this are thinking “This would never happen to me” or “You’d have to be an idiot to put your harness together like that” or my usual favorite “If you wanted climbing to be perfectly safe, you shouldn’t even go.” Blaming the victim gives us a feeling of control over our own safety. However, there are other instances where gear was assembled or re-assembled incorrectly with tragic consequences. No one (or their child) deserves to pay with their life for a simple mistake that can be prevented through good design.

    Short Course in Anthropometry

    NPR just ran an extremely detailed article on the importance and study of anthropometry. The topic is the undue stress nursing places on the spine, even when “proper” lifting procedures are followed. Highlighted is the work of Bill Marras (recent Editor of the journal Human Factors), who developed a sensor rig to measure the forces experienced by the spine. Read it for yourself, but if you want the high points:

    “Moving and lifting patients manually is dangerous even for veteran nursing staff, Marras says, for several reasons:

    • The laws of physics dictate that it’s easiest to lift something when it’s close to your body. But nursing employees have to stand at the side of the bed, relatively far from the patient.

    • Nursing employees also often bend over the patient. That’s important, because there’s a chain of bones along the spine, called facet joints, hidden under the little bumps protruding under the skin. Those bones interconnect and help absorb loads when standing straight. Marras says that when nurses lift as they’re bending, those bones disengage and their disks take most of the force. Those forces are “much, much higher than what you’d expect in an assembly line worker,” he says.

    • When nurses keep working under these loads, it causes microscopic tears in the “end plates,” which are films as thin as credit cards above and below each disc. Those tears lead to scar tissue, which can block the flow of nutrients into the disks — until, eventually, the disks start to collapse. “You could be doing this damage [to your back] for weeks or months or years, and never realize it,” says Marras. “The event that caused you to feel the problem is just the straw that broke the camel’s back.”
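
The “close to your body” point is simple lever arithmetic: the torque a load creates about the lower spine grows with its horizontal distance, and the back extensors, working on a lever arm of only a few centimeters, must multiply their force to balance it. A back-of-the-envelope sketch (all values are assumed for illustration; these are not Marras’s measurements):

```python
# Back-of-the-envelope spinal load from lifting, using a simple lever
# model. All values are assumed for illustration, not measured data.

G = 9.81           # m/s^2, gravitational acceleration
MUSCLE_ARM = 0.05  # m, approximate lever arm of the back extensor muscles

def spine_compression(load_kg: float, distance_m: float) -> float:
    """Approximate extra compressive force (N) on the lower spine."""
    load_torque = load_kg * G * distance_m   # torque the load creates
    muscle_force = load_torque / MUSCLE_ARM  # force the extensors must supply
    return muscle_force                      # which compresses the spine

# A 20 kg load held close (0.25 m) vs. out over a bed (0.6 m):
print(f"close: {spine_compression(20, 0.25):,.0f} N")  # ~981 N
print(f"far:   {spine_compression(20, 0.60):,.0f} N")  # ~2,354 N
```

More than doubling the compressive load simply by standing at the side of the bed, before even counting the lifter’s own upper-body weight, is why no lifting technique makes this safe. (For scale, the commonly cited NIOSH action limit for spinal compression is about 3,400 N.)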

    The final conclusion was that people cannot lift other people safely. Assistive machines are needed, and as the article points out, hospitals do not have them.