
Listening to the End User: NHL/NHLPA Collaboration with Hockey Goalies

Today we present a guest post by Ragan Wilson, PhD student in Human Factors and Applied Cognitive Psychology at NC State University.

Saying that goalies in professional ice hockey see the puck a lot is an understatement. They are their team’s last line of defense against scoring, putting their bodies in the way of the puck to block shots in ways that sometimes do not seem human. To do that, they rely on their skills as well as their protective equipment, including chest protectors. As In Goal Magazine’s Kevin Woodley and Greg Balloch have written, this and other equipment is being re-examined at the professional level by the National Hockey League (NHL) and the National Hockey League Players’ Association (NHLPA).

For the 2018-2019 NHL season, there has been a change in goaltending equipment rules involving chest protectors, according to NHL columnist Nicholas J. Cotsonika. The rule, Rule 11.3, states that “The chest and arm protector worn by each goalkeeper must be anatomically proportional and size-specific based on the individual physical characteristics of that goalkeeper”. In practical terms, this means a goaltender’s chest protection must be proportional in size to the goaltender wearing it, so that, for instance, a 185-pound goalie looks more like a 185-pound goalie than like a 200-210-pound one. The reasoning behind the rule change was to make saves depend more on the goalie’s ability than on extra padding, and potentially to increase scoring in the league. Overall, this continues the NHL and NHLPA’s mission to make goalie equipment slimmer, a mission kick-started by changes to goalie pants and leg pads. The differences between previously approved chest protectors and the newly approved models are shown below, thanks to the website Goalie Coaches, which labeled images from Brian’s Custom Sports’ Instagram page.



To a non-hockey player, the visual differences between the non-NHL-approved and NHL-approved pads look minuscule. However, according to In Goal Magazine, implementing these changes has been an interesting challenge for the NHL as well as for hockey gear companies such as Brian’s and CCM. Whereas changing the pants rule was relatively straightforward, the dimensions of chest protectors are more complicated and more personal to goalies (NHL). The challenge could be seen earlier in the season in the mixed feedback about the new gear. Some current NHL goaltenders, such as the Vegas Golden Knights’ Marc-Andre Fleury (In Goal Magazine) and the Winnipeg Jets’ Connor Hellebuyck (Sports Illustrated), reported more pain from blocking pucks in the upper body. On the other hand, the Toronto Maple Leafs’ Frederik Andersen and Garrett Sparks have not had problems with the changes (Sports Illustrated).

What always makes me happy as a student of human factors psychology is when end users are made an active part of the discussion about changes. Thankfully, that appears to be happening with this rule change: at the beginning of the season, the NHL and NHLPA seemed actively interested in feedback from current NHL goaltenders about what could make them more comfortable with the new equipment standards (In Goal Magazine). Hopefully that continues into next season, with all the rigorous, real-life testing that a season’s worth of regular and playoff games can provide. Considering there are already some interesting, individualized adjustments to the new rules, such as switching companies (the Washington Capitals’ Braden Holtby) or adding another layer of protection like a padded undershirt (Marc-Andre Fleury) (USA Today), it will be interesting to see where this equipment stands come the next off-season, especially in terms of innovation from the companies that produce this gear at the professional level.


Ragan Wilson is a first-year human factors and applied cognitive psychology doctoral student at NC State University. She is mainly interested in the ways that human factors and all areas of sports can be interlinked, from player safety to consumer experiences of live action games.

The current need for enforcement of safety regulations

An NPR article reports on safety violations in Kentucky:

In December 2016, Pius “Gene” Hobbs was raking gravel with the Meade County public works crew when a dump truck backed over him. The driver then accelerated forward, hitting him a second time. Hobbs was crushed to death.

The sole eyewitness to the incident said that the dump truck’s backup beeper wasn’t audible at the noisy worksite. The Kentucky State Police trooper on the scene concurred. Hobbs might not have been able to hear the truck coming.

But when Kentucky Occupational Safety and Health arrived, hours later, the inspector tested the beeper on a quiet street and said it wasn’t a problem.

“These shortcomings are very concerning,” says Jordan Barab, a workplace safety expert who served as Deputy Assistant Secretary of Labor for Occupational Safety and Health under President Barack Obama. “Identifying the causes of these incidents is … vitally important.” Otherwise, the employer doesn’t know how to avoid the next incident, he says.

Gene Hobbs’ case is not the exception. In fact, it’s the norm, according to a recent federal audit.

Kentucky is what’s known as a “state plan,” meaning the federal Occupational Safety and Health Administration has authorized it to run its own worker safety program.

Every year, federal OSHA conducts an audit of all 28 state plans to ensure they are “at least as effective” as the federal agency at identifying and preventing workplace hazards.

According to this year’s audit of Kentucky, which covered fiscal year 2017, KY OSH is not meeting that standard. In fact, federal OSHA identified more shortcomings in Kentucky’s program than any other state.

We know that we must have regulations, and enforcement of those regulations, to have safe environments. Left to their own choices, people tend to choose the options that appear fastest and easiest, not the safest ones. For an interesting read on the history of safety regulation, see this article from the Department of Labor.

In 1898 the Wisconsin bureau reported that it was often difficult to find safety devices that did not reduce efficiency. Sanitary improvements and fire escapes were expensive, which led many employers to resist their adoption. Constant pressure and attention were needed to obtain compliance. Employers objected to the posting of laws in their establishments and some tore them down. The proprietor of a shoe factory with very poor fire escape routes showed “a disposition to defeat” an inspector’s request for more fire escapes, though he complied in the end. A cloak maker who was also found to have inadequate fire escapes went to the extreme of relocating his operation to avoid compliance. Such delays were not uncommon.

When an inspector found abominable conditions in the dipping rooms of a match factory — poorly ventilated rooms filled with poisonous fumes from the liquid phosphorus which made up the match heads — he tried to persuade the operators to make improvements. They objected because of the costs involved and the inspector “left without expecting to see the changes made.” When a machinery manufacturer equipped his ripsaws with guards after an inspection, a reinspection revealed that the employees had removed the guards.

Without regulation, we’ll be back to 1898 in short order.

Lion Air Crash from October 2018

From CNN:

The passengers on the Lion Air 610 flight were on board one of Boeing’s newest, most advanced planes. The pilot and co-pilot of the 737 MAX 8 were more than experienced, with around 11,000 flying hours between them. The weather conditions were not an issue and the flight was routine. So what caused that plane to crash into the Java Sea just 13 minutes after takeoff?

I’ve been waiting for updated information on the Lion Air crash before posting details. When I first read about the accident, it struck me as a collection of violations of human factors safety principles in the design. I’ve pulled together some of the news reports on the crash, organized by the types of problems experienced on the airplane.

1. “a cacophony of warnings”
Fortune Magazine reported on the number of warnings and alarms that began to sound as soon as the plane took flight. The same alarms had occurred on the plane’s previous flight, and there is some blaming of the victims here in asking, “If a previous crew was able to handle it, why not this one?”

The alerts included a so-called stick shaker — a loud device that makes a thumping noise and vibrates the control column to warn pilots they’re in danger of losing lift on the wings — and instruments that registered different readings for the captain and copilot, according to data presented to a panel of lawmakers in Jakarta Thursday.

2. New automation features, no training
The plane included new “anti-stall” technology that the airlines say was neither well explained nor included in Boeing training materials.

In the past week, Boeing has stepped up its response by pushing back on suggestions that the company could have better alerted its customers to the jet’s new anti-stall feature. The three largest U.S. pilot unions and Lion Air’s operations director, Zwingly Silalahi, have expressed concern over what they said was a lack of information.

As was previously revealed by investigators, the plane’s angle-of-attack sensor on the captain’s side was providing dramatically different readings than the same device feeding the copilot’s instruments.

Angle of attack registers whether the plane’s nose is pointed above or below the oncoming air flow. A reading showing the nose is too high could signal a dangerous stall and the captain’s sensor was indicating more than 20 degrees higher than its counterpart. The stick shaker was activated on the captain’s side of the plane, but not the copilot’s, according to the data.

And more from CNN:

“Generally speaking, when there is a new delivery of aircraft — even though they are the same family — airline operators are required to send their pilots for training,” Bijan Vasigh, professor of economics and finance at Embry-Riddle Aeronautical University, told CNN.

Those training sessions generally take only a few days, but they give the pilots time to familiarize themselves with any new features or changes to the system, Vasigh said.

One of the MAX 8’s new features is an anti-stalling device, the maneuvering characteristics augmentation system (MCAS). If the MCAS detects that the plane is flying too slowly or steeply, and at risk of stalling, it can automatically lower the airplane’s nose.

It’s meant to be a safety mechanism. But the problem, according to Lion Air and a growing chorus of international pilots, was that no one knew about that system. Zwingli Silalahi, Lion Air’s operational director, said that Boeing did not suggest additional training for pilots operating the 737 MAX 8. “We didn’t receive any information from Boeing or from regulator about that additional training for our pilots,” Zwingli told CNN Wednesday.

“We don’t have that in the manual of the Boeing 737 MAX 8. That’s why we don’t have the special training for that specific situation,” he said.
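The single-sensor problem described by investigators is easier to see in code. The sketch below is purely illustrative and is not Boeing’s implementation: the function names, thresholds, and trim values are my own assumptions. It simply shows why an automatic nose-down command driven by one angle-of-attack sensor is fragile when that sensor disagrees with its counterpart.

```python
# Illustrative sketch only -- NOT Boeing's MCAS code. Names, thresholds,
# and trim values are assumptions chosen to make the logic concrete.

STALL_AOA_DEG = 15.0        # assumed angle-of-attack limit for "too steep"
NOSE_DOWN_TRIM_DEG = 2.5    # assumed corrective nose-down trim increment

def single_sensor_protection(captain_aoa_deg: float) -> float:
    """Command nose-down trim based on one angle-of-attack sensor.

    A faulty sensor (e.g., the ~20-degree disagreement reported in the
    Lion Air investigation) can trigger repeated, unwanted nose-down
    commands even when the plane is not actually close to a stall.
    """
    if captain_aoa_deg > STALL_AOA_DEG:
        return NOSE_DOWN_TRIM_DEG
    return 0.0

def cross_checked_protection(captain_aoa_deg: float,
                             copilot_aoa_deg: float,
                             max_disagreement_deg: float = 5.0):
    """Same idea, but refuse to act when the two sensors disagree."""
    if abs(captain_aoa_deg - copilot_aoa_deg) > max_disagreement_deg:
        return None  # flag the disagreement to the crew instead of trimming
    average_aoa = (captain_aoa_deg + copilot_aoa_deg) / 2
    return single_sensor_protection(average_aoa)

# With readings like those reported by investigators, the single-sensor
# version keeps commanding nose-down trim; the cross-checked version declines.
print(single_sensor_protection(22.0))        # 2.5 (trims the nose down)
print(cross_checked_protection(22.0, 2.0))   # None (sensors disagree)
```

Whether a cross-check like this was feasible in the real system is beyond this post; the point is that the failure mode investigators described follows directly from trusting a single sensor, and that is a design issue rather than a pilot issue.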

Hawaii False Alarm: The story that keeps on giving

Right after the Hawaii false nuclear alarm, I posted about how the user interface seemed to contribute to the error. At the time, sources were reporting it as a “dropdown” menu. Well, that wasn’t exactly true, but in the last few weeks it’s become clear that truth is stranger than fiction. Here is a run-down of the news on the story (spoiler: every step is a human factors-related issue):

  • Hawaii nuclear attack alarms are sounded, also sending alerts to cell phones across the state
  • Alarm is noted as false and the state struggles to get that message out to the panicked public
  • Error is blamed on a confusing drop-down interface: “From a drop-down menu on a computer program, he saw two options: ‘Test missile alert’ and ‘Missile alert.’”
  • The actual interface is found and shown – rather than a drop-down menu, it is a set of closely clustered links on a 1990s-era-looking web page that read “DRILL-PACOM(CDW)-STATE ONLY” and “PACOM(CDW)-STATE ONLY”
  • It comes to light that part of the reason the wrong alert stood for 38 minutes was that the Governor didn’t remember his Twitter login and password
  • Latest news: the employee who sounded the alarm says it wasn’t an error; he heard that this was “not a drill” and acted accordingly to trigger the real alarm

The now-fired employee has spoken up, saying he was sure of his actions and “did what I was trained to do.” When asked what he’d do differently, he said “nothing,” because everything he saw and heard at the time made him think this was not a drill. His firing is clearly an attempt by Hawaii to get rid of a ‘bad apple.’ Problem solved?

It seems like a good time for my favorite reminder from Sidney Dekker’s book, “The Field Guide to Human Error Investigations” (abridged):

To protect safe systems from the vagaries of human behavior, recommendations typically propose to:

    • Tighten procedures and close regulatory gaps. This reduces the bandwidth in which people operate. It leaves less room for error.
    • Introduce more technology to monitor or replace human work. If machines do the work, then humans can no longer make errors doing it. And if machines monitor human work, they can snuff out any erratic human behavior.
    • Make sure that defective practitioners (the bad apples) do not contribute to system breakdown again. Put them on “administrative leave”; demote them to a lower status; educate or pressure them to behave better next time; instill some fear in them and their peers by taking them to court or reprimanding them.

In this view of human error, investigations can safely conclude with the label “human error”—by whatever name (for example: ignoring a warning light, violating a procedure). Such a conclusion and its implications supposedly get to the causes of system failure.

AN ILLUSION OF PROGRESS ON SAFETY
The shortcomings of the bad apple theory are severe and deep. Progress on safety based on this view is often a short-lived illusion. For example, focusing on individual failures does not take away the underlying problem. Removing “defective” practitioners (throwing out the bad apples) fails to remove the potential for the errors they made.

…[T]rying to change your people by setting examples, or changing the make-up of your operational workforce by removing bad apples, has little long-term effect if the basic conditions that people work under are left unamended.

A ‘bad apple’ is often just a scapegoat that makes people feel better by giving them a focus for blame. Real improvements in safety come from improving the system, not from getting rid of employees who were forced to work within a problematic one.
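To make that concrete, here is a minimal sketch of the kind of system-level fix a human factors review might suggest for an interface like the one described above. It is hypothetical, not the Hawaii Emergency Management Agency’s actual software; the function names and confirmation flow are my own assumptions. The idea is simply that two nearly identical links with drastically different consequences deserve very different confirmation steps.

```python
# Hypothetical sketch of a less error-prone alert flow -- not the actual
# Hawaii EMA software. Option labels and the confirmation step are assumptions.

def send_alert(selected_option: str) -> bool:
    """Require a typed confirmation that spells out drill vs. live alert."""
    is_drill = selected_option.startswith("DRILL-")
    required_phrase = "SEND DRILL" if is_drill else "SEND LIVE ALERT"

    typed = input(f'You selected "{selected_option}". '
                  f'Type "{required_phrase}" to confirm: ')
    if typed.strip().upper() != required_phrase:
        print("Confirmation did not match. Nothing was sent.")
        return False

    broadcast(selected_option, test_mode=is_drill)
    return True

def broadcast(option: str, test_mode: bool) -> None:
    # Placeholder for the real dissemination system.
    kind = "TEST" if test_mode else "LIVE"
    print(f"{kind} alert issued: {option}")

# Example: send_alert("PACOM(CDW)-STATE ONLY") only goes out if the operator
# explicitly types "SEND LIVE ALERT".
```

A forcing function like this does not blame anyone; it changes the system so that the same slip is much harder to make.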

Outside Magazine profiles Anne’s rock climbing & human factors research

Anne’s research on attention and rock climbing was recently featured in an article in Outside Magazine:

To trad climb is to be faced with hundreds of such split-second micro decisions, the consequences of which can be fatal. That emphasis on human judgment and its fallibility intrigued Anne McLaughlin, a psychology professor at North Carolina State University. An attention and behavior researcher, she set out to model how and why rock climbers make decisions, and she’d recruited Weil and 31 other trad climbers to contribute data to the project.

The idea for the study first came about at the crag. In 2011, McLaughlin, Chris Wickens, a psychology professor at Colorado State University, and John Keller, an engineer at Alion Science and Technology, converged in Las Vegas for the Human Factors and Ergonomics Society conference, an annual event that brings together various professionals practicing user-focused product design. With Red Rocks just a few minutes away, the three avid climbers were eager to get some time on the rock before the day’s sessions, says Keller, even if it meant starting at 3 a.m.

Tesla counterpoint: “40% reduction in crashes” with introduction of Autosteer

I posted yesterday about the challenges of fully autonomous cars and cars that approach autonomy. Today I bring you a story about the successes of semi-autonomous features in automobiles.

Tesla has a feature called Autopilot that assists the driver without being completely autonomous. Autopilot includes car-controlled actions such as collision warnings, automatic emergency braking, and automatic lane keeping. Tesla classifies the Autopilot features as Level 2 automation (Level 5 is considered fully autonomous). Rich has already given our thoughts about calling this feature “Autopilot” in a previous post. One particular feature is called Autosteer, described in the NHTSA report as:

The Tesla Autosteer system uses information from the forward-looking camera, the radar sensor, and the ultrasonic sensors, to detect lane markings and the presence of vehicles and objects to provide automated lane-centering steering control based on the lane markings and the vehicle directly in front of the Tesla, if present. The Tesla owner’s manual contains the following warnings: 1) “Autosteer is intended for use only on highways and limited-access roads with a fully attentive driver. When using Autosteer, hold the steering wheel and be mindful of road conditions and surrounding traffic. Do not use Autosteer on city streets, in construction zones, or in areas where bicyclists or pedestrians may be present. Never depend on Autosteer to determine an appropriate driving path. Always be prepared to take immediate action. Failure to follow these instructions could cause serious property damage, injury or death;” and 2) “Many unforeseen circumstances can impair the operation of Autosteer. Always keep this in mind and remember that as a result, Autosteer may not steer Model S appropriately. Always drive attentively and be prepared to take immediate action.” The system does not prevent operation on any road types.

The NHTSA report looking into a fatal Tesla crash also noted that Tesla’s crash rate dropped by almost 40% after the introduction of Autosteer. That’s a lot, considering Dr. Gill Pratt from Toyota said he might be happy with a 1% change.

Autopilot was enabled in October 2015, so there has been a good period of time for post-Autopilot crash data to accumulate.

Toyota Gets It: Self-driving cars depend more on people than on engineering

I recommend reading this interview with Toyota’s Dr. Gill Pratt in its entirety. He discusses, point by point, the challenges of a self-driving car that we consider in human factors but don’t hear much about in the media. For example:

  • Definitions of autonomy vary. True autonomy is far away. He gives the example of a car performing well on an interstate or in light traffic compared to driving through the center of Rome during rush hour.
  • Automation will fail. And the less it fails, the less prepared the driver is to assume control.
  • Emotionally, we cannot accept autonomous cars that kill people, even if they reduce overall crash rates and save lives in the long run.
  • It is difficult to run simulations of autonomous cars that capture the extreme variability of the human drivers around them.

I’ll leave you with the last paragraph in the interview as a summary:

So to sum this thing up, I think there’s a general desire from the technical people in this field to have both the press and particularly the public better educated about what’s really going on. It’s very easy to get misunderstandings based on words like or phrases like “full autonomy.” What does full actually mean? This actually matters a lot: The idea that only the chauffeur mode of autonomy, where the car drives for you, that that’s the only way to make the car safer and to save lives, that’s just false. And it’s important to not say, “We want to save lives therefore we have to have driverless cars.” In particular, there are tremendous numbers of ways to support a human driver and to give them a kind of blunder prevention device which sits there, inactive most of the time, and every once in a while, will first warn and then, if necessary, intervene and take control. The system doesn’t need to be competent at everything all of the time. It needs to only handle the worst cases.

Institutional Memory, Culture, & Disaster

I admit a fascination for reading about disasters. I suppose I’m hoping for the antidote. The little detail that will somehow protect me next time I get into a plane, train, or automobile. A gris-gris for the next time I tie into a climbing rope. Treating my bike helmet as a talisman for my commute. So far, so good.

As human factors psychologists and engineers, we often analyze large-scale accidents and look for the reasons (pun intended) that run deeper than a single operator’s error. You can see some of my previous posts on Wiener’s Laws, Ground Proximity Warnings, and the Deepwater Horizon oil spill.

So, I invite you to read this wonderfully detailed blog post by Ron Rapp about how safety culture can slowly derail, “normalizing deviance.”

Bedford and the Normalization of Deviance

He tells the story of a chartered plane crash in Bedford, Massachusetts in 2014, a take-off with so many skipped safety steps and errors that it seemed destined for a crash. There was plenty of time for the pilot to stop before the crash, leading Rapp to say, “It’s the most inexplicable thing I’ve yet seen a professional pilot do, and I’ve seen a lot of crazy things. If locked flight controls don’t prompt a takeoff abort, nothing will.” He sums up the reasons for the pilots’ “deviant” performance via Diane Vaughan’s factors of normalization (with some interpretation on my part):

  • If rules and checklists and regulations are difficult, tedious, unusable, or interfere with the goal of the job at hand, they will be misused or ignored.
  • We can’t treat top-down training or continuing education as the only source of information. People pass on shortcuts, tricks, and attitudes to each other.
  • Reward the behaviors you want. But we tend to punish safety behaviors when they delay secondary (but important) goals, such as keeping passengers happy.
  • We can’t ignore the social world of the pilots and crew. Speaking out against “probably” unsafe behaviors is at least as hard as calling out a boss or coworker who makes “probably” racist or sexist comments. The higher the ambiguity, the less likely people are to take action (“I’m sure he didn’t mean it that way” or “Well, we skipped that list, but it’s been fine the ten times we’ve done it so far”).
  • The cure? An interdisciplinary solution from human factors psychologists, designers, engineers, and policy makers. That last group might be the most important, in that they must recognize that a focus on safety does not necessarily mean more rules and harsher punishments. It means checking that each piece of the system is efficient, valued, and usable, and that those pieces work together in an integrated way.

Thanks to Travis Bowles for the heads-up on this article.
Feature photo from the NTSB report; photo credit to the Massachusetts Police.

Human Factors Potpourri

Some recent items in the news with a human factors angle:

  • What happened to Google Maps? Interesting comparison of Google Maps from 2010/2016 by designer/cartographer Justin O’Beirne.
  • India will use 3D paintings to slow down drivers. Excellent use of optical illusions for road safety.
  • Death by GPS. GPS mis-routing is the easiest and most relatable example of human-automation interaction. Unfortunately, this article does not discuss the automation literature, instead focusing on more basic processes that, I think, are less relevant.

Wiener’s Laws

The article “The Human Factor” in Vanity Fair is two years old, but since I can’t believe I missed posting it — here it is! It’s a riveting read with details of the Air France Flight 447 accident and intelligent discussion of the impact automation has on human performance. Dr. Nadine Sarter is interviewed and I learned of a list of flight-specific “laws” developed by Dr. Earl Wiener, a past-president of HFES.

“Wiener’s Laws,” from the article and from Aviation Week:

  • Every device creates its own opportunity for human error.
  • Exotic devices create exotic problems.
  • Digital devices tune out small errors while creating opportunities for large errors.
  • Invention is the mother of necessity.
  • Some problems have no solution.
  • It takes an airplane to bring out the worst in a pilot.
  • Whenever you solve a problem, you usually create one. You can only hope that the one you created is less critical than the one you eliminated.
  • You can never be too rich or too thin (Duchess of Windsor) or too careful about what you put into a digital flight-guidance system (Wiener).
  • Complacency? Don’t worry about it.
  • In aviation, there is no problem so great or so complex that it cannot be blamed on the pilot.
  • There is no simple solution out there waiting to be discovered, so don’t waste your time searching for it.
  • If at first you don’t succeed… try a new system or a different approach.
  • In God we trust. Everything else must be brought into your scan.
  • Any pilot who can be replaced by a computer should be.
  • Today’s nifty, voluntary system is tomorrow’s F.A.R.

Kudos to the author, William Langewiesche, for a well-researched and well-written piece.