Category Archives: hci

Hawaii False Alarm: The story that keeps on giving

Right after the Hawaii false nuclear alarm, I posted about how the user interface seemed to contribute to the error. At the time, sources were reporting it as a “dropdown” menu. Well, that wasn’t exactly true, but in the last few weeks it’s become clear that truth is stranger than fiction. Here is a run-down of the news on the story (spoiler: every step is a human factors issue):

  • Hawaii nuclear attack alarms are sounded, also sending alerts to cell phones across the state
  • Alarm is noted as false and the state struggles to get that message out to the panicked public
  • Error is blamed on a confusing drop-down interface: “From a drop-down menu on a computer program, he saw two options: ‘Test missile alert’ and ‘Missile alert.’”
  • The actual interface is found and shown – rather than a drop-down menu it’s just closely clustered links on a 1990s-era website-looking interface that say “DRILL-PACOM(CDW)-STATE ONLY” and “PACOM(CDW)-STATE ONLY”
  • It comes to light that part of the reason the wrong alert stood for 38 minutes was that the Governor didn’t remember his Twitter login and password
  • Latest news: the employee who sounded the alarm says it wasn’t an error; he heard this was “not a drill” and acted accordingly to trigger the real alarm

The now-fired employee has spoken up, saying he was sure of his actions and “did what I was trained to do.” When asked what he’d do differently, he said “nothing,” because everything he saw and heard at the time made him think this was not a drill. His firing is clearly an attempt by Hawaii to get rid of a ‘bad apple.’ Problem solved?

It seems like a good time for my favorite reminder from Sidney Dekker’s book, “The Field Guide to Human Error Investigations” (abridged):

To protect safe systems from the vagaries of human behavior, recommendations typically propose to:

    • Tighten procedures and close regulatory gaps. This reduces the bandwidth in which people operate. It leaves less room for error.
    • Introduce more technology to monitor or replace human work. If machines do the work, then humans can no longer make errors doing it. And if machines monitor human work, they can snuff out any erratic human behavior.
    • Make sure that defective practitioners (the bad apples) do not contribute to system breakdown again. Put them on “administrative leave”; demote them to a lower status; educate or pressure them to behave better next time; instill some fear in them and their peers by taking them to court or reprimanding them.

In this view of human error, investigations can safely conclude with the label “human error”—by whatever name (for example: ignoring a warning light, violating a procedure). Such a conclusion and its implications supposedly get to the causes of system failure.

AN ILLUSION OF PROGRESS ON SAFETY
The shortcomings of the bad apple theory are severe and deep. Progress on safety based on this view is often a short-lived illusion. For example, focusing on individual failures does not take away the underlying problem. Removing “defective” practitioners (throwing out the bad apples) fails to remove the potential for the errors they made.

…[T]rying to change your people by setting examples, or changing the make-up of your operational workforce by removing bad apples, has little long-term effect if the basic conditions that people work under are left unamended.

A ‘bad apple’ is often just a scapegoat that makes people feel better by giving them a focus for blame. Real improvements in safety come from improving the system, not from getting rid of employees who were forced to work within a problematic system.

‘Mom, are we going to die today? Why won’t you answer me?’ – False Nuclear Alarm in Hawaii Due to User Interface


Image from the New York Times

The morning of January 13th, people in Hawaii received a false alarm that the islands were under nuclear attack. One of the messages people received was via cell phone, and it said: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” Today, the Washington Post reported that the alarm was due to an employee pushing the “wrong button” when trying to test the nuclear alarm system.

The quote in the title of this post is from another Washington Post article where people experiencing the alarm were interviewed.

To sum up the issue, the alarm is triggered by choosing an option from a drop-down menu, which had entries for “Test missile alert” and “Missile alert.” The employee chose the wrong option and, once it was chosen, the system had no way to reverse the alarm.

A nuclear alarm system should be held to particularly high usability standards, but this system didn’t even conform to Nielsen’s 10 heuristics. It violates:

  • User control and freedom: Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
  • Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  • Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
  • Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
And those are just the ones I could identify from reading the Washington Post article! Perhaps human factors analysis will become mandatory for these systems, as it already is for FDA-regulated medical devices. To make the first two heuristics concrete, see the sketch below.
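Here is a minimal sketch of what error prevention and an “emergency exit” could look like in an alert-sending interface. This is hypothetical code, not the actual Hawaii system; the function names and the one-minute cancellation window are assumptions for illustration.

```typescript
// Hypothetical sketch of an alert-sending flow -- not the actual Hawaii
// EMA system. Two heuristics are applied: error prevention (a live alert
// requires a confirmation that restates the consequence in plain language)
// and user control and freedom (a cancellation window acts as an undo).

type AlertKind = "DRILL" | "LIVE";

interface AlertRequest {
  kind: AlertKind;
  message: string;
}

const CANCEL_WINDOW_MS = 60_000; // assumed one-minute undo window

async function sendAlert(
  req: AlertRequest,
  confirm: (prompt: string) => Promise<boolean>, // e.g., a modal dialog
  broadcast: (msg: string) => void,              // the actual send
): Promise<{ cancel: () => void }> {
  // Error prevention: the confirmation spells out the consequence, so a
  // live alert cannot be mistaken for a drill at the moment of commitment.
  const prompt =
    req.kind === "LIVE"
      ? "This will send a REAL alert to every phone in the state. Continue?"
      : "This sends a TEST alert clearly marked as a drill. Continue?";
  if (!(await confirm(prompt))) {
    throw new Error("Cancelled by operator");
  }

  // User control and freedom: the broadcast is delayed, giving the
  // operator an emergency exit before the message actually goes out.
  const timer = setTimeout(() => broadcast(req.message), CANCEL_WINDOW_MS);
  return { cancel: () => clearTimeout(timer) };
}
```

A delayed send is only one way to support undo; pairing it with a visible countdown would also address the “visibility of system status” heuristic.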

Did a User Interface Kill 10 Navy Sailors?

I chose a provocative title for this post after reading the report on what caused the wreck of the USS John McCain in August of 2017. In summary: the USS John McCain was in high-traffic waters when the crew believed they had lost control of the ship’s steering. Despite attempts to slow or maneuver, the ship was hit by another large vessel. The bodies of 10 sailors were eventually recovered, and five others were injured.

Today the Navy released its final report on the accident. After reading it, it seems to me the report blames the crew. Here are some quotes from the official Naval report:

  • Loss of situational awareness in response to mistakes in the operation of the JOHN S MCCAIN’s steering and propulsion system, while in the presence of a high density of maritime traffic
  • Failure to follow the International Nautical Rules of the Road, a system of rules to govern the maneuvering of vessels when risk of collision is present
  • Watchstanders operating the JOHN S MCCAIN’s steering and propulsion systems had insufficient proficiency and knowledge of the systems

And a rather devastating:

In the Navy, the responsibility of the Commanding Officer for his or her ship is absolute. Many of the decisions made that led to this incident were the result of poor judgment and decision making of the Commanding Officer. That said, no single person bears full responsibility for this incident. The crew was unprepared for the situation in which they found themselves through a lack of preparation, ineffective command and control and deficiencies in training and preparations for navigation.

Ouch.

Ars Technica called my attention to an important cause that the report never specifically calls out: the poor feedback design of the control system. I think it is a problem that the report focuses on “failures” of the people involved rather than on the design of the machines and systems they used. After my reading, I would summarize the cause of the accident as: “The ship could be controlled from many locations. This control was transferred using a computer interface. That interface did not give sufficient information about its current state or feedback about which station controlled which functions of the ship. This made the crew think they had lost steering control when that control had actually just been moved to another location.” I base this on information from the report, including:

Steering was never physically lost. Rather, it had been shifted to a different control station and watchstanders failed to recognize this configuration. Complicating this, the steering control transfer to the Lee Helm caused the rudder to go amidships (centerline). Since the Helmsman had been steering 1-4 degrees of right rudder to maintain course before the transfer, the amidships rudder deviated the ship’s course to the left.

Even this section calls out the “failure to recognize this configuration.” If the system is designed well, one shouldn’t have to expend any cognitive or physical resources to know from where the ship is being controlled.
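As a thought experiment, here is a minimal sketch of a control-transfer design that makes the controlling station impossible to miss. The class and station names are hypothetical, not the Navy’s actual system; the key ideas are that every transfer is announced to every console and that the rudder order carries over rather than silently resetting to amidships.

```typescript
// Hypothetical sketch -- not the ship's actual control software. Every
// station subscribes to control-state changes, so each console can show
// "who is steering" at a glance, and a transfer preserves the current
// rudder order instead of recentering it.

type Station = "HELM" | "LEE_HELM" | "AFT_STEERING";

interface ShipControlState {
  steeringAt: Station;
  rudderOrderDeg: number; // positive = right rudder
}

class ControlTransferSystem {
  private state: ShipControlState = { steeringAt: "HELM", rudderOrderDeg: 0 };
  private listeners: Array<(s: ShipControlState) => void> = [];

  // Every console registers a listener and displays the current state.
  onChange(listener: (s: ShipControlState) => void): void {
    this.listeners.push(listener);
    listener(this.state); // show status immediately, not only on change
  }

  setRudder(deg: number): void {
    this.state = { ...this.state, rudderOrderDeg: deg };
    this.listeners.forEach((notify) => notify(this.state));
  }

  transferSteering(to: Station): void {
    // Carry the rudder order over: in the McCain incident, the transfer
    // recentered the rudder, which silently changed the ship's course.
    this.state = { ...this.state, steeringAt: to };
    this.listeners.forEach((notify) => notify(this.state));
  }
}

// Usage: a banner on every console answers "who has steering?"
const system = new ControlTransferSystem();
system.onChange((s) =>
  console.log(`STEERING AT: ${s.steeringAt} | rudder ${s.rudderOrderDeg} deg`),
);
system.setRudder(3); // Helmsman holding right rudder to maintain course
system.transferSteering("LEE_HELM"); // announced everywhere, rudder unchanged
```

The point is not the specific code but the design rule it encodes: state that determines who controls the ship should be broadcast, persistent, and glanceable, never something a watchstander must reconstruct from memory.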

Overall, I was surprised at the tone of this report regarding crew performance. Perhaps some of it is deserved, but without a hard look at the systems the crew must use, I don’t have much faith that we can avoid future accidents. Fitts and Jones started the field of human factors in 1947 when they insisted that the design of the cockpit created accident-prone situations. This went against the belief of the time, which was that “pilot error” was the main factor. Their work ushered in a new era, one where we try to improve the systems people must use as well as their training and decision making. The picture below is of the interface of the USS John S McCain, commissioned in 1994. I would be very interested to see how it appears in action.

US Navy (USN) Boatswain’s Mate Seaman (BMSN) Charles Holmes mans the helm aboard the Arleigh Burke class guided missile destroyer USS JOHN S. MCCAIN (DDG 56) as the ship gets underway for a Friends and Family Day cruise from its homeport at Commander Fleet Activities (CFA) Yokosuka Naval Base, Japan. Source: Wikimedia Commons

Designing the technology of ‘Blade Runner 2049’

The original Blade Runner is my favorite movie and can be credited with sparking my interest in human-technology and human-autonomy interactions. The sequel is fantastic if you have not seen it (I’ve seen it twice already and will soon see it a third time).

If you’ve seen the original or the sequel, the representations of incidental technologies may have seemed unusual. For example, the technologies feel like a strange hybrid of digital/analog systems, they are mostly voice controlled, and the hardware and software have a well-worn look. Machines also make satisfying noises as they work (also present in the sequel). This is a refreshing contrast to the super-clean, touch-based, transparent augmented reality displays shown in other movies.

This really great article from Engadget [WARNING: CONTAINS SPOILERS] profiles the company that designed the technology shown in Blade Runner 2049. I’ve always been fascinated by futuristic UI concepts shown in movies. What is the interaction like? What is the information density? Is it multi-modal? Why does it work like that, and does it fit in-world?

The article suggests that the team really thought deeply about how to portray technology and UI by thinking about the fundamentals (I would love to have this job):

Blade Runner 2049 was challenging because it required Territory to think about complete systems. They were envisioning not only screens, but the machines and parts that would make them work.

With this in mind, the team considered a range of alternate display technologies. They included e-ink screens, which use tiny microcapsules filled with positively and negatively charged particles, and microfiche sheets, an old analog format used by libraries and other archival institutions to preserve old paper documents.


Thoughtful and Fun Interfaces in the Reykjavik City Museum

I stopped over in Iceland on the way to a conference and popped in to the Reykjavik City Museum, not knowing what I’d find. I love the idea of technology in a museum, but I’m usually disappointed. Either the concepts are bad, the technology is silly (press a button, light some text), or it just doesn’t work, beaten into submission by armies of 4-year-olds.

Not at the Settlement Exhibition in Reykjavik. There are two unique interfaces I want to cover, but I’ll start at the beginning with a more typical touchscreen that controlled a larger wall display. As you enter the museum, there are multiple stations for reading pages of the Sagas. These are the stories of Iceland’s history, from the 9th to 11th centuries, and they are beautifully illustrated.

Image: Njáls saga miniature

They have been scanned, so you can browse the pages (with translations) and not damage them. I didn’t have all day to spend there, but after starting some of the Sagas, I wished I had.

Further in, you see the reason for the location: the excavation of the oldest known structure in Iceland, a longhouse, is inside the museum! Around it are typical displays with text and audio, explaining the structure and what life was like at that time.

Then I moved into a smaller, dark room with an attractive lit podium (see video below). You could touch it, and it controlled the large display on the wall. The display showed the longhouse as a 3-D virtual reconstruction. As you moved your finger around the circles on the podium, the camera rotated so you could get a good look at all parts of the longhouse. As you moved between circles, a short audio clip would play to introduce you to the next section. Each circle controlled the longhouse display, but the closer to the center, the more “inside” the structure you could see. Fortunately, I found someone else made a better video of the interaction than I did:

The last display was simple, but took planning and thought. Near the exit was a large table display of the longhouse. It was also a touch interface, where you could put your hand on the table to activate information about how parts of the house were used. Think of the challenges: when I was there, it was surrounded by 10 people, all touching it at once. We were all looking for information in different languages. It has to be low enough for everyone to see, but not so low it’s hard to touch. Overall, they did a great job.

Be sure to do a stopover if you cross the Atlantic!

Both videos come from Alex Martire on YouTube.

Treemap sighting in the wild: U.S. Budget proposal

I get pretty excited when I see my favorite infovis being used: the treemap.

Just released today – the proposed U.S. budget as a treemap!

So, how well did this visualization work for its intended purpose?

  • Points awarded for using a treemap – it makes it so easy to see how massive Social Security and healthcare are.
  • Points deducted for the cluttered overlay text in the Transportation section.
  • Points deducted for making the areas clickable but not actually providing more information beyond a platitude (“Military Personnel: When it comes to our service members and their families, America stands united in support. The budget helps ensure that those who serve our country receive all the support and opportunities they’ve earned and deserve.”)
  • Points deducted for making the “learn more” link go to a YouTube video of the entire State of the Union address, when I could be learning more from a deeper treemap.

I’d like to see more of the blocks broken down into the components they fund, making the visualization as informative and transparent as my go-to example of a treemap: the stock market. My second favorite treemap is a program that will treemap your hard drive, making it easy to see where those giant, space-hogging files are hiding, deep in directories you forgot were there. I treemapped my lab server with it as we ran out of space and found giant video files, about 10 directories down in an unlikely spot, that were eating up our gigabytes.
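For readers curious how these are drawn, here is a minimal “slice-and-dice” treemap layout, the simplest variant of the algorithm (the stock-market maps typically use the squarified variant, which produces more readable aspect ratios). The toy budget data is made up for illustration.

```typescript
// Minimal slice-and-dice treemap: each node's rectangle area is
// proportional to its total value, and orientation alternates per level.

interface TreeNode {
  name: string;
  value?: number; // leaf size (e.g., dollars or bytes)
  children?: TreeNode[];
}

interface Rect { x: number; y: number; w: number; h: number; name: string }

function total(n: TreeNode): number {
  return n.children
    ? n.children.reduce((sum, c) => sum + total(c), 0)
    : n.value ?? 0;
}

function layout(
  n: TreeNode, x: number, y: number, w: number, h: number,
  vertical = true, out: Rect[] = [],
): Rect[] {
  out.push({ x, y, w, h, name: n.name });
  if (!n.children || n.children.length === 0) return out;
  const sum = total(n);
  let offset = 0; // fraction of the parent already consumed
  for (const child of n.children) {
    const frac = total(child) / sum;
    if (vertical) {
      layout(child, x + offset * w, y, frac * w, h, false, out);
    } else {
      layout(child, x, y + offset * h, w, frac * h, true, out);
    }
    offset += frac;
  }
  return out;
}

// Toy example: a made-up budget laid out on a 100 x 60 canvas.
const budget: TreeNode = {
  name: "Budget",
  children: [
    { name: "Social Security", value: 1000 },
    { name: "Healthcare", value: 1100 },
    {
      name: "Defense",
      children: [
        { name: "Personnel", value: 150 },
        { name: "Procurement", value: 130 },
      ],
    },
  ],
};
console.log(layout(budget, 0, 0, 100, 60));
```

Recursing into children is exactly what I want from the budget visualization: clicking “Defense” should reveal the treemap of its components, just like drilling into directories on a hard drive.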

Perhaps we could have a treemap that lets us change things in the budget to see how we would make it look, like the American Public Media interactive “Budget Hero” game from a few years ago (now defunct, or I would link it)? I learned a LOT about what could budge and what couldn’t in the budget from that game.

*All the points deducted are far outweighed by my support of the treemap being used in the first place! Brilliant!

Radio interview with Rich

Our own Rich Pak was interviewed by the Clemson radio show “Your Day.”

Audio clip: embedded in the original post.

They cover everything from the birth of human factors psychology to the design of prospective memory aids for older adults. Enjoy!

Worst Mobile Interface Ever

I was reading articles the other day and came across a site that, as many do, reformatted itself for my phone. Almost all reformatted-for-mobile sites are terrible, but this one is my favorite.

You cannot scroll through the 21-page article by moving your finger up and down, as you would on a normal website. The only way to change pages is via the horizontal slider at the bottom. Good luck trying to move it so slightly that it only advances one page! And yes, moving the slider left and right does move the page up and down.

Usability process not used for ACA website

A recently released report from March 2013 reveals the process used to create Healthcare.gov. Hindsight is always 20/20, but we’ve also worked hard to establish best practices for considering both engineering and the user in software development. These contributions need to be valued, especially for large-scale projects. Looking through the slides, one thing I note is that even this improved approach barely mentions the end users of the website. There is one slide that states “Identify consumer paths; review and modify vignettes.” The two examples of this are users who have more or less complex needs when signing up for insurance. I don’t see any mention of involving actual users prior to release.

The NPR write-up states:

Consultants noted there was no clear leader in charge of this project, which we now know contributed to its disastrous release. And there was no “end-to-end testing” of its full implementation, something we now know never happened.

Some of this may fall on us, for not being convincing enough that human factors methods are worth the investment. How much would the public be willing to pay for a solid usability team to work with the website developers?

App Usability Evaluations for the Mental Health Field

We’ve posted before on usability evaluations of iPads and apps for academics (e.g., here and here), but today I’d like to point to a blog dedicated to evaluating apps for mental health professionals.

In the newest post, Dr. Jeff Lawley discusses the usability of a DSM reference app from Kitty CAT Psych. For those who didn’t take intro psych in college, the DSM is the Diagnostic and Statistical Manual, which classifies symptoms into disorders. It’s interesting to read an expert’s take on this app – he considers attributes I would not have thought of, such as whether the app retains information (a privacy issue).

As Dr. Lawley notes on his “about” page, there are few apps designed for mental health professionals and even fewer evaluations of these apps. Hopefully his blog can fill that niche and inspire designers to create more mobile tools for these professionals.