All posts by Anne McLaughlin

Associate Professor, Department of Psychology, North Carolina State University, Raleigh, NC

“All Too Human” Now Available

I remember the day I discovered human factors. I took the course as an elective, because I’d already taken every other psychology class available to me.  At the time, I knew I loved psychology but also knew I didn’t want to be a clinician or a counselor. The first day, we were assigned a typical “bad design” project – find a frustrating object in the world and write about what was wrong with it. The hardest part of the assignment was limiting myself to one bad design – I had a grievance list pages long. I knew then that I had found a field that matched my desire to ‘fix’ the world.

So many of us who work in human factors had a similar epiphany, and yet our field is virtually unknown to the public. I wanted to change that by writing the kind of non-fiction book I like to read: jargon-free content augmented by stories from history, the news, and the author’s own experiences.

This month my book, All Too Human, was published by Cambridge University Press. The intended audience spans the general public, students, and practitioners in related fields like engineering and computer science.

The Process

I began this project over three years ago. For my cognitive class (undergraduate), I wrote chapters to show the application of basic processes: perception, attention, memory. For my human factors psychology course (graduate), I wanted to tell stories that paired with the primary source readings and spurred discussion.

Each semester I user-tested the chapters by having students write reaction papers, then iterated on the text to make sure I met my three goals: 1) write jargon-free, accessible prose; 2) convey the message that to build our world, we must understand the capabilities and limitations of the human mind; and 3) show that there is a profession that does just that: human factors psychology. I edited the chapters repeatedly to make them approachable, with fun figures and end-of-book notes rather than in-text citations (to keep the text clean and easy to read).

The Audiences

For educators: At the undergraduate or graduate level, I wrote this book to be a companion to text and lecture. I often use companion texts in my own courses, such as The Design of Everyday Things or Set Phasers on Stun, to grab students’ attention and generate discussion. Email collegesales@cambridge.org to ask for an examination copy.

For practitioners: I envisioned this book as one you could recommend to co-workers to give them a fun overview of “what we do” and what we can do. I want it to inspire multi-disciplinary conversations. I want others to understand where our recommendations come from.

For non-fiction lovers: This book has a message for you: we can control the design of our human world. Products and systems should work for us, offering pleasure, efficiency, and safety. But for that to happen, we must demand it. This book will give you the background to understand just how much control we deserve, and how many ways we have to influence companies, governments, and ourselves to make a better world.

Where to Buy

Available from all bookstores, including:
Amazon
Barnes & Noble
Cambridge University Press

Excerpt from the chapter Needles in Haystacks, on signal detection theory:

Sensitive and Specific

In August of 2020 Magawa received the PDSA Gold Medal for bravery and devotion to duty. A native of Tanzania, he was described as brave, friendly, and a determined worker. Over the course of four years, he found thirty-nine buried landmines in Cambodia. In a country with millions of mines there is still a ways to go, but experts like Magawa are on the front lines of mine detection. Also, Magawa is a rat.

A great attribute of mine-seeking rats is their weight. Although African giant pouched rats are very large for rats, they are still too light to trigger a landmine. They are also smart, have noses at least as discerning as the eyesight of pigeons, and are easy to train. Their training is a real rat race – they work about five hours each weekday on learning to discriminate the smell of a landmine from other scents, all for some mashed banana. They indicate finding a mine by stopping and digging with their little paws.

Magawa was trained by APOPO, a Belgian non-profit with a mouthful of a name (the Anti-Persoonsmijnen Ontmijnende Product Ontwikkeling, which translates to “Anti-Personnel Landmines Removal Product Development”). The rats’ training is extensive and their success is measured in hits, misses, false alarms, and correct rejections, with more available data than for the Project SEA HUNT pigeons. Why not just use a metal detector? The answer fits the theme of this chapter: the high cost of false alarms. The ground is full of nonexplosive metal bits and trash, and one estimate is that there are 1,000 false alarms per mine found using detectors alone. The rats, meanwhile, can only graduate by demonstrating a 100 percent hit rate on four buried mines and no more than one false alarm. The rats have to be both sensitive and specific. The sensitivity rate is calculated as the number of mines they identified in an area created by their handlers divided by the total number of mines in that area. Their specificity is the number of false alarms per 100 square meters – a slightly different calculation than in other signal detection work, where specificity is based on the probability of a false alarm. It may be too much to ask of any single rat to be such a perfect performer, so typically every area is searched by a few rats and their combined scores make the final sensitivity and specificity rates for the team.

In a study led by psychologist Alan Poling, a professor at Western Michigan University, the trained rats were taken to Mozambique, where they explored 93,400 square meters of mine-infested area. The rats found 41 mines. Humans used other tools to check the area as well, like metal detectors, but found no additional mines. The rats succeeded with extreme sensitivity (100 percent hits and zero misses). But what about their specificity? In this real-world test, they had about 0.33 false alarms for every 100 square meters searched (617 false alarms), or about 40,383 fewer than would be expected from a metal-detecting human. My math is undoubtedly a brash generalization, as the mines aren’t distributed equally and differ by country, but in terms of both hits and false alarms the rats are clear winners.
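For readers who want to see the arithmetic, here is a minimal sketch of the measures described in the excerpt, written in Python. The function names and the demo numbers (a hypothetical four-mine training field) are mine, not APOPO’s.

```python
# A minimal sketch of the detection measures described in the excerpt.
# Function names and demo numbers are illustrative, not APOPO's own.

def sensitivity(hits: int, total_mines: int) -> float:
    """Mines the rat flagged divided by mines actually buried in the area."""
    return hits / total_mines

def false_alarm_rate(false_alarms: int, area_sq_m: float) -> float:
    """APOPO-style specificity: false alarms per 100 square meters searched
    (a different measure than classic signal detection specificity)."""
    return false_alarms / (area_sq_m / 100.0)

def graduates(hits: int, total_mines: int, false_alarms: int) -> bool:
    """The graduation bar from the excerpt: a 100 percent hit rate on the
    buried mines and no more than one false alarm."""
    return hits == total_mines and false_alarms <= 1

# Hypothetical training field: four buried mines, all found, one false dig.
print(graduates(hits=4, total_mines=4, false_alarms=1))    # True
print(sensitivity(hits=4, total_mines=4))                  # 1.0
print(false_alarm_rate(false_alarms=1, area_sq_m=400.0))   # 0.25 per 100 sq m
```

The same two functions applied to a whole team’s pooled hits and false digs give the combined team rates the excerpt describes.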

Infovis in the coronavirus era

This article from The Conversation is a great starting point for learning about information visualization, the challenges of creating clear graphs, and what an information consumer can do to understand what they are seeing.

The author is in the geosciences, so it’s not surprising that her best example comes from that domain. The map at the top of this post flips the colors of the “landslide” areas in a way that makes it clear which areas are susceptible.

It’s like having 10 different remote controls for 10 different TVs

This NPR interview with Danielle Ofri, author of a new book on medical errors (and their prevention), had some interesting insight into how human factors play out during a pandemic.

Her new book is “When We Do Harm,” and I was most interested in these excerpts from the interview:

“…we got many donated ventilators. Many hospitals got that, and we needed them. … But it’s like having 10 different remote controls for 10 different TVs. It takes some time to figure that out. And we definitely saw things go wrong as people struggled to figure out how this remote control works from that one.”

“We had many patients being transferred from overloaded hospitals. And when patients come in a batch of 10 or 20, 30, 40, it is really a setup for things going wrong. So you have to be extremely careful in keeping the patients distinguished. We have to have a system set up to accept the transfers … [and] take the time to carefully sort patients out, especially if every patient comes with the same diagnosis, it is easy to mix patients up.”

And my favorite, even though it isn’t necessarily COVID-19 related:

“For example, … [with] a patient with diabetes … it won’t let me just put “diabetes.” It has to pick out one of the 50 possible variations of on- or off- insulin — with kidney problems, with neurologic problems and to what degree, in what stage — which are important, but I know that it’s there for billing. And each time I’m about to write about it, these 25 different things pop up and I have to address them right now. But of course, I’m not thinking about the billing diagnosis. I want to think about the diabetes. But this gets in the way of my train of thought. And it distracts me. And so I lose what I’m doing if I have to attend to these many things. And that’s really kind of the theme of medical records in the electronic form is that they’re made to be simple for billing and they’re not as logical, or they don’t think in the same logical way that clinicians do.”

Workshop on Designing for Older Adults

The CREATE group, authors of the book Designing for Older Adults, is holding a workshop:

CREATE (www.create-center.org) is offering a workshop on designing for older adults, presenting guidelines and best practices. Topics include existing and emerging technologies, usability protocols, interface and instructional design, and design for social engagement, living environments, healthcare, transportation, leisure, and work. Each participant will receive a complimentary copy of CREATE’s book Designing for Older Adults, 3rd Ed., winner of the 2019 Richard Kalish Innovative Publication Award, and a USB drive with CREATE publications and tools.

Additional information on the workshop can be found below.

We wrote a book in the series, Designing Displays for Older Adults, which we are currently revising. It will be available in early 2020.

Listening to the End User: NHL/NHLPA Collaboration with Hockey Goalies

Today we present a guest post by Ragan Wilson, PhD student in Human Factors and Applied Cognitive Psychology at NC State University.

Saying that goalies in professional ice hockey see the puck a lot is an understatement. They are the last line of defense for their team against scoring, putting their bodies in the way of the puck to block shots in ways that sometimes do not seem human. To do that, they rely on their skills as well as their protective equipment, including chest protectors. As In Goal Magazine’s Kevin Woodley and Greg Balloch have written, at the professional level this and other equipment is being re-examined by the National Hockey League (NHL) and the National Hockey League Players’ Association (NHLPA).

For the 2018-2019 NHL season, there has been a change in goaltending equipment rules involving chest protectors, according to NHL.com columnist Nicholas J. Cotsonika. This rule, Rule 11.3, states that “The chest and arm protector worn by each goalkeeper must be anatomically proportional and size-specific based on the individual physical characteristics of that goalkeeper.” In practical terms, the rule means that a goaltender’s chest protection must be proportional to the goaltender wearing it, so that, for instance, a 185-pound goalie looks like a 185-pound goalie rather than like a 200- to 210-pound one. The reasoning behind the change was to make saves depend more on ability than on extra padding, and potentially to increase scoring in the league. Overall, this continues a mission of both the NHL and the NHLPA to make goalie equipment slimmer, kick-started by earlier changes to goalie pants and leg pads. The difference between previously approved chest protectors and the newly approved models is shown below, thanks to the website Goalie Coaches, which labeled images from Brian’s Custom Sports Instagram page.



To a non-hockey player, the visual differences between the non-NHL-approved and NHL-approved pads look minuscule. However, according to In Goal Magazine, implementing these changes has been an interesting challenge for the NHL as well as for hockey gear companies such as Brian’s and CCM. Whereas changing the pants rule was more straightforward, the dimensions of chest protectors are more complicated and personal to goalies (NHL). This challenge could be seen earlier in the season in the mixed feedback about the new gear. Some current NHL goalies, such as the Vegas Golden Knights’ Marc-Andre Fleury (In Goal Magazine) and the Winnipeg Jets’ Connor Hellebuyck (Sports Illustrated), noted more pain from blocking pucks in the upper body region. On the other hand, the Toronto Maple Leafs’ Frederik Andersen and Garrett Sparks have not had problems with the changes (Sports Illustrated).

What always makes me happy as a student of human factors psychology is when end users are made an active part of the discussion about changes. Thankfully, that appears to be happening with this rule change: the NHL and NHLPA have been actively soliciting and considering feedback from current NHL goaltenders about what could make them more comfortable with the new equipment standards since the beginning of the season (In Goal Magazine). Hopefully that continues into next season, with all the rigorous, real-life testing that a season’s worth of regular and playoff games can provide. Considering that there are already some interesting, individualized responses to the new equipment rules, such as switching companies (the Washington Capitals’ Braden Holtby) or adding another layer of protection like a padded undershirt (Marc-Andre Fleury) (USA Today), it will be interesting to see where this equipment stands come the next off-season, especially in terms of innovation from the companies that produce gear at the professional level.


Ragan Wilson is a first-year human factors and applied cognitive psychology doctoral student at NC State University. She is mainly interested in the ways that human factors and all areas of sports can be interlinked, from player safety to consumer experiences of live action games.

The current need for enforcement of safety regulations

An NPR article reports on safety violations in Kentucky:

In December 2016, Pius “Gene” Hobbs was raking gravel with the Meade County public works crew when a dump truck backed over him. The driver then accelerated forward, hitting him a second time. Hobbs was crushed to death.

The sole eyewitness to the incident said that the dump truck’s backup beeper wasn’t audible at the noisy worksite. The Kentucky State Police trooper on the scene concurred. Hobbs might not have been able to hear the truck coming.

But when Kentucky Occupational Safety and Health arrived, hours later, the inspector tested the beeper on a quiet street and said it wasn’t a problem.

“These shortcomings are very concerning,” says Jordan Barab, a workplace safety expert who served as Deputy Assistant Secretary of Labor for Occupational Safety and Health under President Barack Obama. “Identifying the causes of these incidents is … vitally important.” Otherwise, the employer doesn’t know how to avoid the next incident, he says.

Gene Hobbs’ case is not the exception. In fact, it’s the norm, according to a recent federal audit.

Kentucky is what’s known as a “state plan,” meaning the federal Occupational Safety and Health Administration has authorized it to run its own worker safety program.

Every year, federal OSHA conducts an audit of all 28 state plans to ensure they are “at least as effective” as the federal agency at identifying and preventing workplace hazards.

According to this year’s audit of Kentucky, which covered fiscal year 2017, KY OSH is not meeting that standard. In fact, federal OSHA identified more shortcomings in Kentucky’s program than any other state.

We know that we must have regulations, and enforcement of those regulations, to have safe environments. Left to our own devices, people tend to choose the options that appear fastest and easiest, not the safest ones. For an interesting read on the history of safety regulation, see this article from the Department of Labor.

In 1898 the Wisconsin bureau reported that it was often difficult to find safety devices that did not reduce efficiency. Sanitary improvements and fire escapes were expensive, which led many employers to resist their adoption. Constant pressure and attention were needed to obtain compliance. Employers objected to the posting of laws in their establishments and some tore them down. The proprietor of a shoe factory with very poor fire escape routes showed “a disposition to defeat” an inspector’s request for more fire escapes, though he complied in the end. A cloak maker who was also found to have inadequate fire escapes went to the extreme of relocating his operation to avoid compliance. Such delays were not uncommon.

When an inspector found abominable conditions in the dipping rooms of a match factory — poorly ventilated rooms filled with poisonous fumes from the liquid phosphorus which made up the match heads — he tried to persuade the operators to make improvements. They objected because of the costs involved and the inspector “left without expecting to see the changes made.” When a machinery manufacturer equipped his ripsaws with guards after an inspection, a reinspection revealed that the employees had removed the guards.

Without regulation, we’ll be back to 1898 in short order.

Lion Air Crash from October 2018

From CNN:

The passengers on the Lion Air 610 flight were on board one of Boeing’s newest, most advanced planes. The pilot and co-pilot of the 737 MAX 8 were more than experienced, with around 11,000 flying hours between them. The weather conditions were not an issue and the flight was routine. So what caused that plane to crash into the Java Sea just 13 minutes after takeoff?

I’ve been waiting for updated information on the Lion Air crash before posting details. When I first read about the accident it struck me as a collection of human factors safety violations in design. I’ve pulled together some of the news reports on the crash, organized by the types of problems experienced on the airplane.

1. “a cacophony of warnings”
Fortune Magazine reported on the number of warnings and alarms that began to sound as soon as the plane took flight. The same alarms had occurred on the plane’s previous flight, and there is some blaming of the victims here when people ask, “If a previous crew was able to handle it, why not this one?”

The alerts included a so-called stick shaker — a loud device that makes a thumping noise and vibrates the control column to warn pilots they’re in danger of losing lift on the wings — and instruments that registered different readings for the captain and copilot, according to data presented to a panel of lawmakers in Jakarta Thursday.

2. New automation features, no training
The plane included new “anti-stall” technology that the airlines say was neither explained well nor included in Boeing’s training materials.

In the past week, Boeing has stepped up its response by pushing back on suggestions that the company could have better alerted its customers to the jet’s new anti-stall feature. The three largest U.S. pilot unions and Lion Air’s operations director, Zwingli Silalahi, have expressed concern over what they said was a lack of information.

As was previously revealed by investigators, the plane’s angle-of-attack sensor on the captain’s side was providing dramatically different readings than the same device feeding the copilot’s instruments.

Angle of attack registers whether the plane’s nose is pointed above or below the oncoming air flow. A reading showing the nose is too high could signal a dangerous stall and the captain’s sensor was indicating more than 20 degrees higher than its counterpart. The stick shaker was activated on the captain’s side of the plane, but not the copilot’s, according to the data.

And more from CNN:

“Generally speaking, when there is a new delivery of aircraft — even though they are the same family — airline operators are required to send their pilots for training,” Bijan Vasigh, professor of economics and finance at Embry-Riddle Aeronautical University, told CNN.

Those training sessions generally take only a few days, but they give the pilots time to familiarize themselves with any new features or changes to the system, Vasigh said.
One of the MAX 8’s new features is an anti-stalling device, the maneuvering characteristics augmentation system (MCAS). If the MCAS detects that the plane is flying too slowly or steeply, and at risk of stalling, it can automatically lower the airplane’s nose.

It’s meant to be a safety mechanism. But the problem, according to Lion Air and a growing chorus of international pilots, was that no one knew about that system. Zwingli Silalahi, Lion Air’s operational director, said that Boeing did not suggest additional training for pilots operating the 737 MAX 8. “We didn’t receive any information from Boeing or from regulator about that additional training for our pilots,” Zwingli told CNN Wednesday.

“We don’t have that in the manual of the Boeing 737 MAX 8. That’s why we don’t have the special training for that specific situation,” he said.
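Nothing public tells us exactly how MCAS decided to act, but the reports above describe a textbook redundancy problem: two angle-of-attack sensors disagreed by more than 20 degrees, and the automation acted anyway. Here is a minimal sketch, in Python and with entirely hypothetical names and thresholds, of the kind of cross-check that inhibits automation when redundant sensors disagree:

```python
# A hypothetical cross-check sketch, NOT Boeing's actual MCAS logic.
# Idea: before automation trims the nose down based on angle-of-attack (AoA)
# data, require the two redundant sensors to agree; on disagreement, do
# nothing automatic and alert the crew instead.

DISAGREEMENT_TOLERANCE_DEG = 5.0   # hypothetical allowable sensor spread
STALL_WARNING_AOA_DEG = 15.0       # hypothetical stall-risk threshold

def sensors_agree(captain_aoa_deg: float, copilot_aoa_deg: float) -> bool:
    """True when the redundant AoA sensors read within tolerance of each other."""
    return abs(captain_aoa_deg - copilot_aoa_deg) <= DISAGREEMENT_TOLERANCE_DEG

def auto_nose_down(captain_aoa_deg: float, copilot_aoa_deg: float) -> bool:
    """Command automatic nose-down only when both sensors agree there is a
    stall risk; otherwise leave control with the pilots."""
    if not sensors_agree(captain_aoa_deg, copilot_aoa_deg):
        print("AOA DISAGREE: automation inhibited, crew alerted")
        return False
    return min(captain_aoa_deg, copilot_aoa_deg) > STALL_WARNING_AOA_DEG

# The readings described in the reports: one sensor more than 20 degrees
# above its counterpart. The cross-check would refuse to trim nose-down.
print(auto_nose_down(captain_aoa_deg=25.0, copilot_aoa_deg=4.0))   # False
```

The human factors point is the same one the pilot unions made: if automation can act this decisively, pilots need to know the feature exists, what triggers it, and how to override it.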

Human Factors and the Ballot Box

New NPR story on the non-usability of ballots, voting software, and other factors affecting our elections:

New York City’s voters were subject to a series of setbacks after the election board unrolled a perforated two-page ballot. Voters who didn’t know they had to tear at the edges to get at the entire ballot ended up skipping the middle pages. Then the fat ballots jammed the scanners, long lines formed, and people’s ballots got soaked in the rain. When voters fed the soggy ballots into scanners, more machines malfunctioned.

In Georgia, hundreds blundered on their absentee ballot, incorrectly filling out the birth date section. Counties originally threw out the ballots before a federal judge ordered they be counted.

And in Broward County, Fla., 30,000 people who voted for governor skipped the contest for U.S. Senate. The county’s election board had placed that contest under a block of multi-lingual instructions, which ran halfway down the page. Quesenbery says voters scanning the instructions likely skimmed right over the race.

She has seen this design before. In 2009, King County, Wash., buried a tax initiative under a text-heavy column of instructions. An estimated 40,000 voters ended up missing the contest, leading the state to pass a bill mandating ballot directions look significantly different from the contests below.

“We know the answers,” says Quesenbery. “I wish we were making new mistakes, not making the same old mistakes.”

The story didn’t even mention the issues with the “butterfly ballot” from Florida in 2000. Whitney Quesenbery is right. We do know the answers, and we certainly know the methods for getting the answers. We need the will to apply them in our civics, not just in commercial industry.

Hawaii False Alarm: The story that keeps on giving

Right after the Hawaii false nuclear alarm, I posted about how the user interface seemed to contribute to the error. At the time, sources were reporting it as a “dropdown” menu. Well, that wasn’t exactly true, and in the last few weeks it’s become clear that truth is stranger than fiction. Here is a run-down of the news on the story (spoiler: every step is a human factors issue):

  • Hawaii nuclear attack alarms are sounded, also sending alerts to cell phones across the state
  • Alarm is noted as false and the state struggles to get that message out to the panicked public
  • Error is blamed on a confusing drop-down interface: “From a drop-down menu on a computer program, he saw two options: ‘Test missile alert’ and ‘Missile alert.’”
  • The actual interface is found and shown – rather than a drop-down menu, it’s just closely clustered links on a 1990s-era-looking website that say “DRILL-PACOM(CDW)-STATE ONLY” and “PACOM(CDW)-STATE ONLY”
  • It comes to light that part of the reason the wrong alert stood for 38 minutes was that the Governor didn’t remember his Twitter login and password
  • Latest news: the employee who sounded the alarm says it wasn’t an error; he heard this was “not a drill” and acted accordingly, triggering the real alarm

The now-fired employee has spoken up, saying he was sure of his actions and “did what I was trained to do.” When asked what he’d do differently, he said “nothing,” because everything he saw and heard at the time made him think this was not a drill. His firing is clearly an attempt by Hawaii to get rid of a ‘bad apple.’ Problem solved?

It seems like a good time for my favorite reminder from Sidney Dekker’s book, “The Field Guide to Human Error Investigations” (abridged):

To protect safe systems from the vagaries of human behavior, recommendations typically propose to:

    • Tighten procedures and close regulatory gaps. This reduces the bandwidth in which people operate. It leaves less room for error.
    • Introduce more technology to monitor or replace human work. If machines do the work, then humans can no longer make errors doing it. And if machines monitor human work, they can snuff out any erratic human behavior.
    • Make sure that defective practitioners (the bad apples) do not contribute to system breakdown again. Put them on “administrative leave”; demote them to a lower status; educate or pressure them to behave better next time; instill some fear in them and their peers by taking them to court or reprimanding them.

In this view of human error, investigations can safely conclude with the label “human error”—by whatever name (for example: ignoring a warning light, violating a procedure). Such a conclusion and its implications supposedly get to the causes of system failure.

AN ILLUSION OF PROGRESS ON SAFETY
The shortcomings of the bad apple theory are severe and deep. Progress on safety based on this view is often a short-lived illusion. For example, focusing on individual failures does not take away the underlying problem. Removing “defective” practitioners (throwing out the bad apples) fails to remove the potential for the errors they made.

…[T]rying to change your people by setting examples, or changing the make-up of your operational workforce by removing bad apples, has little long-term effect if the basic conditions that people work under are left unamended.

A ‘bad apple’ is often just a scapegoat that makes people feel better by giving them a focus for blame. Real improvement in safety comes from fixing the system, not from getting rid of employees who were forced to work within a problematic one.

‘Mom, are we going to die today? Why won’t you answer me?’ – False Nuclear Alarm in Hawaii Due to User Interface


Image from the New York Times

The morning of January 13th, people in Hawaii received a false alarm that the state was under nuclear attack. One of the messages people received was via cell phone, and it said: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” Today, the Washington Post reported that the alarm was due to an employee pushing the “wrong button” when trying to test the nuclear alarm system.

The quote in the title of this post is from another Washington Post article where people experiencing the alarm were interviewed.

To sum up the issue: the alarm is triggered by choosing an option from a drop-down menu, which had entries for “Test missile alert” and “Missile alert.” The employee chose the wrong option, and once it was chosen, the system had no way to reverse the alarm.

A nuclear alarm system should be held to particularly high usability requirements, but this system didn’t even conform to Nielsen’s 10 heuristics. It violates:

  • User control and freedom: Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
  • Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  • Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
  • Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
And those are just the ones I could identify from reading the Washington Post article! Perhaps a human factors analysis will become required for these systems, as it already is for FDA-regulated medical devices.
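To make the error-prevention and undo heuristics concrete, here is a minimal sketch of a safer dispatch flow. Everything here, from the typed confirmation phrase to the ten-second cancel window, is hypothetical; it illustrates the heuristics, not Hawaii’s actual system.

```python
# A hypothetical alert-console sketch illustrating three violated heuristics:
# error prevention, user control (undo), and visibility of system status.
# This is NOT Hawaii's actual system.

import time

def confirm_live_alert() -> bool:
    """Error prevention: a live alert requires typing an exact phrase, so a
    mis-click on a closely clustered link cannot send it by itself."""
    typed = input('Type "SEND LIVE ALERT" to confirm (anything else cancels): ')
    return typed == "SEND LIVE ALERT"

def send_alert(is_drill: bool, cancel_window_s: int = 10) -> None:
    """Dispatch an alert with a confirmation step and an undo window."""
    if not is_drill and not confirm_live_alert():
        print("Cancelled. No alert was sent.")            # visible status
        return
    label = "DRILL" if is_drill else "LIVE"
    print(f"{label} alert queued; sending in {cancel_window_s} seconds. "
          "Press Ctrl-C to cancel.")                      # visible status
    try:
        time.sleep(cancel_window_s)                       # the undo window
    except KeyboardInterrupt:
        print(f"Undo: {label} alert cancelled before transmission.")
        return
    print(f"{label} alert transmitted.")                  # visible status

send_alert(is_drill=True)   # drills skip confirmation but are clearly labeled
```

Even a design this simple would have prevented both halves of the Hawaii failure: the mis-click could not have sent a live alert, and the cancel window would have offered the “emergency exit” the real system lacked.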