Institutional Memory, Culture, & Disaster

I admit a fascination for reading about disasters. I suppose I’m hoping for the antidote. The little detail that will somehow protect me next time I get into a plane, train, or automobile. A gris-gris for the next time I tie into a climbing rope. Treating my bike helmet as a talisman for my commute. So far, so good.

As human factors psychologists and engineers, we often analyze large-scale accidents and look for the reasons (pun intended) that run deeper than a single operator's error. You can see some of my previous posts on Wiener's Laws, Ground Proximity Warnings, and the Deepwater Horizon oil spill.

So, I invite you to read this wonderfully detailed blog post by Ron Rapp about how safety culture can slowly derail, “normalizing deviance.”

Bedford and the Normalization of Deviance

He tells the story of a chartered plane crash in Bedford, Massachusetts in 2014, a take-off with so many skipped safety steps and errors that it seemed destined for a crash. There was plenty of time for the pilot to stop before the crash, leading Rapp to say, “It’s the most inexplicable thing I’ve yet seen a professional pilot do, and I’ve seen a lot of crazy things. If locked flight controls don’t prompt a takeoff abort, nothing will.” He sums up the reasons for the pilots’ “deviant” performance via Diane Vaughan’s factors of normalization (with some interpretation on my part):

  • If rules and checklists and regulations are difficult, tedious, unusable, or interfere with the goal of the job at hand, they will be misused or ignored.
  • We can’t treat top-down training or continuing education as the only source of information. People pass on shortcuts, tricks, and attitudes to each other.
  • Reward the behaviors you want. But we tend to punish safety behaviors when they delay secondary (but important) goals, such as keeping passengers happy.
  • We can’t ignore the social world of the pilots and crew. Speaking out against “probably” unsafe behaviors is at least as hard as calling out a boss or coworker who makes “probably” racist or sexist comments. The higher the ambiguity, the less likely people are to take action (“I’m sure he didn’t mean it that way,” or “Well, we skipped that checklist, but it’s been fine the last ten times.”)
  • The cure? An interdisciplinary solution involving human factors psychologists, designers, engineers, and policy makers. That last group might be the most important, in that they must recognize that a focus on safety does not necessarily mean more rules and harsher punishments. It means checking that each piece of the system is efficient, valued, and usable, and that those pieces work together in an integrated way.

    Thanks to Travis Bowles for the heads-up on this article.
    Feature photo from the NTSB report, photo credit to the Massachusetts Police.

    Thoughtful and Fun Interfaces in the Reykjavik City Museum

    I stopped over in Iceland on the way to a conference and popped in to the Reykjavik City Museum, not knowing what I’d find. I love the idea of technology in a museum, but I’m usually disappointed. Either the concepts are bad, the technology is silly (press a button, light some text), or it just doesn’t work, beaten into submission by armies of 4-year-olds.

    Not at the Settlement Exhibit in Reykjavik. There are two unique interfaces I want to cover, but I’ll start at the beginning with a more typical touchscreen that controlled a larger wall display. As you enter the museum, there are multiple stations for reading pages of the Sagas: the stories of Iceland’s history from the 9th to 11th centuries, beautifully illustrated.
    [Image: illustrated manuscript page from Njáls saga]
    They have been scanned, so you can browse the pages (with translations) and not damage them. I didn’t have all day to spend there, but after starting some of the Sagas, I wished I had.

    Further in you see the reason for the location: the excavation of the oldest known structure in Iceland, a longhouse, is in the museum! Around it are typical displays with text and audio, explaining the structure and what life was like at that time.

    Then I moved into a smaller, dark room with an attractive lit podium (see video below). You could touch it, and it controlled the large display on the wall, which showed the longhouse as a 3-D virtual reconstruction. As you moved your finger around the circles on the podium, the camera rotated so you could get a good look at all parts of the longhouse, and as you moved between circles, a short audio clip introduced the next section. Each circle controlled the longhouse display, but the closer your touch was to the center, the more of the structure’s interior you could see. Fortunately, someone else made a better video of the interaction than I did (credited below).
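    To make the interaction concrete, here is a minimal sketch of how a concentric-circle podium like this might drive an orbiting camera. It is my own reconstruction under assumptions (a touch surface that reports normalized coordinates, an invented number of rings and camera distances), not the exhibit’s actual software.

```python
import math

# Hypothetical sketch: map a touch on a concentric-ring podium to camera parameters
# for a 3-D longhouse model. Ring count and distances are assumptions for illustration.

NUM_RINGS = 4        # assumed number of circles printed on the podium
MAX_DIST = 30.0      # camera distance (meters) when touching the outermost ring
MIN_DIST = 2.0       # camera distance for the innermost ring ("inside" the longhouse)

def touch_to_camera(x: float, y: float) -> dict:
    """Map a normalized touch point (x, y in -1..1, origin at podium center)
    to an orbit angle and a camera distance."""
    radius = min(math.hypot(x, y), 1.0)                  # how far from the center
    azimuth = math.degrees(math.atan2(y, x))             # orbit angle around the model
    ring = min(int(radius * NUM_RINGS), NUM_RINGS - 1)   # which circle was touched
    # Outer rings keep the camera outside; inner rings move the viewpoint indoors.
    distance = MIN_DIST + (MAX_DIST - MIN_DIST) * ring / (NUM_RINGS - 1)
    return {"azimuth_deg": round(azimuth, 1), "distance_m": distance, "ring": ring}

# Touch near the rim to orbit the exterior; touch near the center to look inside.
print(touch_to_camera(0.95, 0.10))   # outermost ring: camera stays outside
print(touch_to_camera(0.10, 0.05))   # innermost ring: camera moves inside
```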

    The last display was simple but took planning and thought. Near the exit was a large table display of the longhouse. It was also a touch interface: you could put your hand on the table to bring up information about how each part of the house was used. Think of the design challenges: when I was there, the table was surrounded by ten people, all touching it at once, all looking for information in different languages. It had to be low enough for everyone to see, but not so low that it was hard to touch. Overall, they did a great job.
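    As a thought experiment, here is a minimal sketch of the kind of logic such a shared table needs: several simultaneous touches, each resolved independently to a hotspot and to that visitor’s chosen language. The hotspot names, languages, and text are hypothetical; this is not the exhibit’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of a shared multi-touch table: every simultaneous contact is
# handled independently, so visitors can read about different hotspots in different
# languages at the same time. Hotspots, languages, and text are invented examples.

HOTSPOTS = {
    "hearth": {
        "en": "The long central hearth heated and lit the longhouse.",
        "de": "Die lange zentrale Feuerstelle heizte und beleuchtete das Langhaus.",
    },
    "benches": {
        "en": "People slept and worked on raised benches along the walls.",
        "de": "Die Menschen schliefen und arbeiteten auf Bänken entlang der Wände.",
    },
}

@dataclass
class Touch:
    contact_id: int   # id assigned by the table's touch tracker
    hotspot: str      # region of the longhouse plan under the hand
    language: str     # language selected at that visitor's side of the table

def popups_for(touches: list[Touch]) -> list[str]:
    """Return one popup per simultaneous touch, each in that visitor's language."""
    popups = []
    for t in touches:
        text = HOTSPOTS.get(t.hotspot, {}).get(t.language)
        if text:  # silently ignore touches outside any known hotspot
            popups.append(f"[{t.contact_id}] {text}")
    return popups

# Two visitors touching different hotspots at once, in different languages:
print(popups_for([Touch(1, "hearth", "en"), Touch(2, "benches", "de")]))
```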

    Be sure to do a stopover if you cross the Atlantic!

    Both videos come from Alex Martire on YouTube.

    Tesla is wrong to use the “Autopilot” term

    Self-driving cars are a hot topic! See the Wikipedia page on autonomous cars for a short primer. This post is mainly an exploration of how the technology is presented to the user.

    Tesla markets its self-driving technology under the name “Autopilot”. The German government is apparently unhappy with that term because it could be misleading (LA Times):

    Germany’s transport minister told Tesla to cease using the Autopilot name to market its cars in that country, under the theory that the name suggests the cars can drive themselves without driver attention, the news agency Reuters reported Sunday.

    Tesla wants to be perceived as first to market with a fully autonomous car (hence the term “Autopilot”), yet it stresses that Autopilot is only a driver-assistance system and that the driver is meant to stay vigilant. But I do not think the term is perceived that way by most laypeople. It encourages unrealistic expectations and may lead to uncritical use and acceptance of the technology, or complacency.

    Complacency can manifest as:

    • too much trust in the automation (more than warranted)
    • allocation of attention to other things rather than to monitoring the automation’s proper functioning
    • over-reliance on the automation (letting it carry out too much of the task)
    • reduced awareness of one’s surroundings (situation awareness)

    Complacency is especially dangerous when unexpected situations occur and the driver must resume manual control.  The non-profit Consumer Reports says:

    “By marketing their feature as ‘Autopilot,’ Tesla gives consumers a false sense of security,” says Laura MacCleery, vice president of consumer policy and mobilization for Consumer Reports. “In the long run, advanced active safety technologies in vehicles could make our roads safer. But today, we’re deeply concerned that consumers are being sold a pile of promises about unproven technology. ‘Autopilot’ can’t actually drive the car, yet it allows consumers to have their hands off the steering wheel for minutes at a time. Tesla should disable automatic steering in its cars until it updates the program to verify that the driver’s hands are on the wheel.”

    Companies must commit immediately to name automated features with descriptive—not exaggerated—titles, MacCleery adds, noting that automakers should roll out new features only when they’re certain they are safe.

    Tesla responded:

    “We have great faith in our German customers and are not aware of any who have misunderstood the meaning, but would be happy to conduct a survey to assess this.”

    But Tesla does its customers a disservice by marketing the system as “Autopilot” and by selectively releasing videos of the system performing flawlessly.

    Using terms such as “Autopilot,” and releasing videos of only near-perfect performances of the technology, will only increase the likelihood of driver complacency.

    But no matter how they are marketed, these systems are just machines that rely on high-quality sensor input (radar, cameras, etc.). Sensors can fail, GPS data can be stale, and situations can change quickly and dramatically (particularly on the road). The system WILL make a mistake, and on the road the cost of that single mistake can be deadly.

    Parasuraman and colleagues have heavily researched how humans behave when exposed to highly reliable automation in the context of flight automation/autopilot systems.  In a classic study, they first induced a sense of complacency by exposing participants to highly reliable automation.  Later,  when the automation failed, the more complacent participants were much worse at detecting the failure (Parasuraman, Molloy, & Singh, 1993).

    Interestingly, when researchers examined highly autonomous autopilot systems in aircraft, they found that pilots were often confused by or distrustful of the automation’s decisions (e.g., initiating course corrections without any pilot input), suggesting LOW complacency. But it is important to note that pilots are highly trained and have probably not been subjected to the same degree of effusively positive marketing about the benefits of self-driving technology that the public is receiving. Tesla, in essence, tells drivers to “trust us,” further increasing the likelihood of driver complacency:

    We are excited to announce that, as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. Eight surround cameras provide 360 degree visibility around the car at up to 250 meters of range. Twelve updated ultrasonic sensors complement this vision, allowing for detection of both hard and soft objects at nearly twice the distance of the prior system. A forward-facing radar with enhanced processing provides additional data about the world on a redundant wavelength, capable of seeing through heavy rain, fog, dust and even the car ahead.

    To make sense of all of this data, a new onboard computer with more than 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software. Together, this system provides a view of the world that a driver alone cannot access, seeing in every direction simultaneously and on wavelengths that go far beyond the human senses.

    References

    Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced “complacency.” The International Journal of Aviation Psychology, 3(1), 1–23.

    Some other key readings on complacency:

    Parasuraman, R. (2000). Designing automation for human use: empirical studies and quantitative models. Ergonomics, 43(7), 931–951. http://doi.org/10.1080/001401300409125

    Parasuraman, R., & Wickens, C. D. (2008). Humans: Still vital after all these years of automation. Human Factors, 50(3), 511–520. http://doi.org/10.1518/001872008X312198

    Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. http://doi.org/10.1177/0018720810376055


    Human Factors Potpourri

    Some recent items in the news with a human factors angle:

    • What happened to Google Maps? An interesting comparison of Google Maps in 2010 versus 2016 by designer/cartographer Justin O’Beirne.
    • India will use 3D paintings to slow down drivers.  Excellent use of optical illusions for road safety.
    • Death by GPS. GPS mis-routing is the easiest and most relatable example of human-automation interaction. Unfortunately, the article does not discuss the automation literature, focusing instead on more basic processes that, I think, are less relevant.

    So you want to go to school for Human Factors: Final Steps

    This is Post 4 in our ongoing series about graduate school in Human Factors. (Post 1 & Post 2 & Post 3)

    1. Prepare your materials and apply

    • Take the GRE. Most programs will require GRE scores, and you’ll want to take the test early in case you need to take it again. You can and should study for the GRE; no matter what people tell you, studying affects scores. Why is a good GRE score so important? It is not only about getting admitted: GRE scores are often used to allocate fellowships, RAs, and TAs. A bonus fellowship could mean as much as a 30% increase in your funding offer.
    • Select at least 3 people to write letters of reference on your behalf. They should be faculty who know you well and can speak about your ability to succeed in graduate school.
      Do not include letter writers such as family, friends, pastors, or other “character references.” They hold little to no weight and may count against you if the review committee assumes you couldn’t find academic references.
    • When selecting letter writers, ask whether they can write “a positive recommendation” rather than just “a recommendation.” You want an honest answer: a letter from a class instructor that only says “This person was in my class. They seemed interested. They received X grade” doesn’t mean much to the review committee. Alert your letter writers at least a month ahead of the first deadline, preferably two.
    • Even for professors you know well, it never hurts to remind them of the research activities you’ve done and what you learned from them. A page with a bulleted list will jog your letter writer’s memory and help them write a detailed, personal letter.

    2. Wait!

    • You’ll probably hear about acceptance in February, but it may be as late as the end of March. If you were put on a waitlist, you might not know until just before the April 15th deadline; schools may have put out offers and are waiting to hear whether those are accepted before making an offer to you. There is no shame in coming from the waitlist; even the waitlists are very competitive for PhD programs.

    Wiener’s Laws

    The article “The Human Factor” in Vanity Fair is two years old, but I can’t believe I missed posting it, so here it is! It’s a riveting read, with details of the Air France Flight 447 accident and an intelligent discussion of the impact automation has on human performance. Dr. Nadine Sarter is interviewed, and I learned of a list of flight-specific “laws” developed by Dr. Earl Wiener, a past president of HFES.

    “Wiener’s Laws,” from the article and from Aviation Week:

    • Every device creates its own opportunity for human error.
    • Exotic devices create exotic problems.
    • Digital devices tune out small errors while creating opportunities for large errors.
    • Invention is the mother of necessity.
    • Some problems have no solution.
    • It takes an airplane to bring out the worst in a pilot.
    • Whenever you solve a problem, you usually create one. You can only hope that the one you created is less critical than the one you eliminated.
    • You can never be too rich or too thin (Duchess of Windsor) or too careful about what you put into a digital flight-guidance system (Wiener).
    • Complacency? Don’t worry about it.
    • In aviation, there is no problem so great or so complex that it cannot be blamed on the pilot.
    • There is no simple solution out there waiting to be discovered, so don’t waste your time searching for it.
    • If at first you don’t succeed… try a new system or a different approach.
    • In God we trust. Everything else must be brought into your scan.
    • Any pilot who can be replaced by a computer should be.
    • Today’s nifty, voluntary system is tomorrow’s F.A.R.

    Kudos to the author, William Langewiesche, for a well-researched and well-written piece.

    So you want to go to school for Human Factors: The Approach Email

    This is Post 3 in our ongoing series about graduate school in Human Factors. (Post 1 & Post 2)

    Your initial email is your first impression and should be managed carefully. Address all communications formally, and consider having someone proofread your message before you send it. That means:

    1. Address everyone by their proper title

    • Bad: “Hi Rich…,” “Hey,” or just launching into the message
    • Good: “Dr. McLaughlin,” or “Professor McLaughlin,”

    2. Be specific.

    • Bad: “I am very interested in your research on X. It is very interesting. The more I read about it, the more I am interested in it. It seems very interesting and important.”
    • Good: “I recently read a collection of your papers on X. It was very interesting to me as I saw connections with the topics I have been studying, such as Y.”

    3. Be succinct! Omit needless words.

    4. Stay on topic/avoid excessive personal anecdotes:

    • Bad: “After my house burned down and I lost everything, I sat back and thought about what I really wanted in life and discovered it was to work in your lab.”
    • Bad: writing a wall of text (e.g., one giant paragraph with no line breaks)
    • Good: “I was fortunate to learn about the field of human factors when we had a special topics course in Ergonomics at my university. For that class, I did [describe project], which led me to your work on X.”

    5. Avoid inadvertently selfish language

    • Bad: “Your lab would help me in my interests and my career. It would be the best thing for me.”
    • Good: “I have experience in multiple statistical programs, including SPSS and MATLAB. As a research assistant in Dr. X’s lab, I have experience with data entry, cleaning data, and analysis. Although I have not yet gotten to run participants through a study protocol, I have been allowed to observe the graduate students in that task.”

    6. Proofread for grammar and typos

    • Bad: your vs you’re, any misspelled words, and so on.

    7. Avoid carelessness, such as sending an email to Dr. A but addressing it to Dr. B.

    Below is a sample “approach email” to the professor you are considering as an advisor. Yours will differ, but this is an example of the level of formality and what to include.

    Dear Dr. FutureAdvisor,

    I am a senior psychology major at My University and interested in pursuing a Ph.D. in Human Factors Psychology after graduation. I came across your research when I was collecting articles for a literature review on user trust in automated systems and am interested in applying to your lab to work on similar topics.

    In the past two years, I have worked as a research assistant in a lab here at My University and spent a summer in an NSF REU program at Bigger University. In the MU1 lab, I worked with Dr. So Andso on research into motivation changes across the lifespan. I learned to enter and clean data for analysis with SPSS and SAS, follow a research protocol to run participants, and write SPSS syntax. One specific project I worked on was investigating whether people over 65 reported different motivations for performance and whether they responded differently than younger adults to reinforcement schedules on an implicit learning task. This gave me an interest in aging but more generally an interest in individual differences.

    I am excited by the prospect of continuing in a research program after graduation and believe I would be a good fit for your lab. Please let me know if you will be accepting applications this year.

    Thank you for your time,
    YourName

    The optimal time for sending this email is the fall semester of your senior year. This gives you time to communicate, perhaps plan a visit, and let the faculty member know you’ll be applying to their program.

    Discussion of Human Factors on “Big Picture Science” podcast

    You all know I love podcasts. One of my favorites, Big Picture Science, featured an interview with journalist Nicholas Carr on over-reliance on automation. The full episode, What the Hack, also covers computer security. To skip to the HF portion, click here.

    • +points for mentioning human factors by name
    • +points for clearly having read much of the trust in automation literature
    • -points for falling back on the “we automate because we’re lazy” claim, rather than acknowledging that the complexity of many modern systems requires automation for a human to be able to succeed. Do you want that flight to NY on the day you want it? Then we need automation to make that happen; the task has moved beyond human ability to accomplish alone.
    • -points for the tired argument that things are different now, that Google is making us dumber. It is essentially the same argument that has accompanied every new technology, including the printing press.* We aren’t any different from the humans who painted caves 17,300 years ago.

    For more podcasts on humans and automation, check out this recent Planet Money: The Big Red Button. You’ll never look at an elevator the same way.

    *While looking up support for the claim that people have always thought their era was worse than the previous, I found this blog post. Looks like I’m not the first to have this exact thought.

