The Patient Writes the Prescription

[Photo: the dashboard message in the Jeep Grand Cherokee]

I took the photo above in my brother-in-law’s 2015 Jeep Grand Cherokee EcoDiesel. It says “Exhaust Filter Nearing Full Safely Drive at Highway Speeds to Remedy.”

I’d never seen anything like that before, and neither had he – it seemed like a terrible idea at first. What if the person couldn’t drive at highway speeds right then? Spending an unknown amount of time driving at highway speeds, wasting gas, also seemed unpleasant. My brother-in-law said he had been having issues with the car before, but it wasn’t until the Jeep downloaded a software update that it displayed this message on the dashboard.

My own car will be 14 years old this year (nearing an age where it can get its own learner’s permit?), so I had to adjust to the idea of a car that updated itself. I was intrigued by the issue and looked around to see what other Jeep owners had to say.

I found another unhappy customer at the diesel Jeep forum:

At the dealer a very knowledgeable certified technician explained to me that the problem is that we had been making lots of short trips in town, idling at red lights, with the result that the oil viscosity was now out of spec and that the particulate exhaust filter was nearly full and needed an hour of 75 mph driving to get the temperature high enough to burn off the accumulated particulates. No person and no manual had ever ever mentioned that there is a big problem associated with city driving.

And further down the rabbit hole, I found it wasn’t just the diesel Jeep. This is from a Dodge Ram forum:

I have 10,000K on 2014 Dodge Ram Ecodiesel. Warning came on that exhaust filter 90% full. Safely drive at highway speeds to remedy. Took truck on highway & warning changed to exhaust system regeneration in process. Exhaust filter 90% full.
All warnings went away after 20 miles. What is this all about?

It looks like Jeep added a supplement to their owner’s manual in 2015 to explain the problem:

Exhaust Filter XX% Full Safely Drive at Highway Speeds to Remedy — This message will be displayed on the Driver Information Display (DID) if the exhaust particulate filter reaches 80% of its maximum storage capacity. Under conditions of exclusive short duration and low speed driving cycles, your diesel engine and exhaust after-treatment system may never reach the conditions required to cleanse the filter to remove the trapped PM. If this occurs, the “Exhaust Filter XX% Full Safely Drive at Highway Speeds to Remedy” message will be displayed in the DID. If this message is displayed, you will hear one chime to assist in alerting you of this condition. By simply driving your vehicle at highway speeds for up to 20 minutes, you can remedy the condition in the particulate filter system and allow your diesel engine and exhaust after-treatment system to cleanse the filter to remove the trapped PM and restore the system to normal operating condition.

But now that I’ve had time to think about it, I agree with the remedy. After all, my own car just has a ‘check engine’ light no matter what the issue. Twenty minutes on the highway is a lot easier than scheduling a trip to a mechanic.

What could be done better is the communication of the warning. It tells you what to do, and sort of why, but not how long you have to act or the consequences of not acting. The manual contains a better explanation of why (although its 20-minute figure does not match the 60-minute estimate of at least one expert), but not many people read the manual. The manual also doesn’t match the message: it says you’ll see a percentage full, while the message just said “nearing full.” The dash display should direct the driver to more information in the manual, or, on such a modern display, scroll to reveal more information (showing partial text so the driver knows to scroll). Knowing how long you have to act is the most critical piece, and a percentage might convey that, since the driver can probably assume it’s safe to keep driving closer to 100% before taking action. As written, it looks as though the driver needs to find a way to drive at highway speeds right now, but hopefully that is not the case. I can’t say for sure, though, since neither the manual nor the display told me the answer.
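To make the critique concrete, here is a minimal sketch of the message logic I have in mind. The 80% trigger comes from the manual; the 95% escalation point, the wording, and the function itself are my own assumptions, not Jeep’s actual software:

```python
from typing import Optional

def exhaust_filter_message(pct_full: float) -> Optional[str]:
    """Return a dashboard message, or None if no driver action is needed.

    Hypothetical design: every message states the filter level, what to do,
    how long the driver has, and the consequence of doing nothing.
    """
    if pct_full < 80:
        # Below the manual's 80% trigger, regeneration is handled
        # automatically, so stay quiet.
        return None
    if pct_full < 95:
        return (f"Exhaust Filter {pct_full:.0f}% Full. Within the next few "
                "trips, drive 20 min at highway speeds to clean it. "
                "See 'Exhaust Filter' in your manual.")
    # Near capacity: the consequence of inaction is now explicit.
    return (f"Exhaust Filter {pct_full:.0f}% Full. Drive at highway speeds "
            "now or the filter must be cleaned by a dealer.")
```

Even a rough escalation rule like this answers the two questions the real display left open: how urgent is the condition, and what happens if I ignore it?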

Calling the Media out for Misleading InfoViz

I was reading an article on my local news today and saw this graphic, apparently made for the article.

[Graphic: state-by-state map of firearm deaths, from the article]
Being from Alabama, and just a pattern-recognition machine in general, I immediately noticed an anomaly: the lightest pink surrounded on all sides by the darkest red? Unlikely. The writer helpfully provided a source, though – the FBI – so I could look at the data myself.

[Screenshot: the FBI source table]

There, right at the start, is a footnote for Alabama. It says “3 Limited supplemental homicide data were received.” Illinois is the only other state with a footnote, but because it’s not so different from its neighbors, it didn’t stand out enough for me to notice.

Florida was not included in the FBI table and is thus grey – a good choice to show there were no data for that state. But as for Alabama and Illinois, it is misleading to include known-bad data in a graph that offers no explanation. Those states should also be grey, rather than implying the limited information is the truth.
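A sketch of the fix: when assigning colors, a footnoted state should be binned with the no-data states rather than colored as if its rate were trustworthy. The rates, bins, and colors below are made up for illustration:

```python
# Hypothetical rates per 100,000, plus the states the FBI footnoted.
rates = {"Georgia": 3.8, "Tennessee": 4.1, "Alabama": 1.2, "Illinois": 2.9}
flagged = {"Alabama", "Illinois"}  # "limited supplemental homicide data"

GREY = "#cccccc"                                     # same as no data at all
REDS = ["#fee5d9", "#fcae91", "#fb6a4a", "#cb181d"]  # light pink to dark red

def state_color(state: str) -> str:
    """Grey out states with missing OR known-incomplete data."""
    if state not in rates or state in flagged:
        return GREY
    bins = [2.0, 3.0, 4.0]  # crude equal-width bins, just for the sketch
    return REDS[sum(rates[state] >= b for b in bins)]

print(state_color("Alabama"))  # "#cccccc" – grey, not a misleading light pink
```

One line of logic is all it takes to keep “we barely got data” from rendering identically to “hardly anyone died.”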

I looked up similar data from other sources to check just how misleading the graphic was – because wouldn’t it be nice if my home state had figured out some magic formula for preventing firearm deaths? Unfortunately, the Centers for Disease Control and Prevention (CDC) statistics on gun deaths put Alabama in the top four states for gun deaths. That’s quite the opposite of the optimism-inducing light pink in the first graphic. The graph below is for 2014 while the first graphic is for 2013, but in case you’re thinking something changed between those years, I also looked up 2012 (the CDC appears to publish these data every two years). The CDC put firearm deaths per capita in Alabama even higher that year than in 2014.
[Graphic: CDC firearm death rates by state, 2014]

In closing, I don’t think this graphic was intentionally misleading. Sure, there are plenty of examples where I’d happily accuse malice instead of bad design, but most of the time it’s probably just people working under a deadline or with software tools that don’t allow custom corrections. We do have to be careful, though – I’d hate to see Alabama not receive aid to curb its firearm death rate because of a poor information visualization.

Institutional Memory, Culture, & Disaster

I admit a fascination with reading about disasters. I suppose I’m hoping for the antidote: the little detail that will somehow protect me the next time I get into a plane, train, or automobile. A gris-gris for the next time I tie into a climbing rope. Treating my bike helmet as a talisman for my commute. So far, so good.

As human factors psychologists and engineers, we often analyze large-scale accidents and look for the reasons (pun intended) that run deeper than a single operator’s error. You can see some of my previous posts on Wiener’s Laws, Ground Proximity Warnings, and the Deepwater Horizon oil spill.

So, I invite you to read this wonderfully detailed blog post by Ron Rapp about how safety culture can slowly derail, “normalizing deviance.”

Bedford and the Normalization of Deviance

He tells the story of a chartered plane crash in Bedford, Massachusetts in 2014, a take-off with so many skipped safety steps and errors that it seemed destined for a crash. There was plenty of time for the pilot to stop before the crash, leading Rapp to say, “It’s the most inexplicable thing I’ve yet seen a professional pilot do, and I’ve seen a lot of crazy things. If locked flight controls don’t prompt a takeoff abort, nothing will.” He sums up the reasons for these pilots’ “deviant” performance via Diane Vaughan’s factors of normalization (with some interpretation on my part):

  • If rules and checklists and regulations are difficult, tedious, unusable, or interfere with the goal of the job at hand, they will be misused or ignored.
  • We can’t treat top-down training or continuing education as the only source of information. People pass on shortcuts, tricks, and attitudes to each other.
  • Reward the behaviors you want. But we tend to punish safety behaviors when they delay secondary (but important) goals, such as keeping passengers happy.
  • We can’t ignore the social world of the pilots and crew. Speaking out against “probably” unsafe behaviors is at least as hard as calling out a boss or coworker who makes “probably” racist or sexist comments. The higher the ambiguity, the less likely people are to take action (“I’m sure he didn’t mean it that way.” or “Well, we skipped that checklist, but it’s been fine the last ten times.”)
  • The cure? An interdisciplinary solution coming from human factors psychologists, designers, engineers, and policy makers. That last group might be the most important, in that they recognize a focus on safety is not necessarily more rules and harsher punishments. It’s checking that each piece of the system is efficient, valued, and usable and that those systems work together in an integrated way.

    Thanks to Travis Bowles for the heads-up on this article.
    Feature photo from the NTSB report, photo credit to the Massachusetts Police.

    Thoughtful and Fun Interfaces in the Reykjavik City Museum

    I stopped over in Iceland on the way to a conference and popped in to the Reykjavik City Museum, not knowing what I’d find. I love the idea of technology in a museum, but I’m usually disappointed. Either the concepts are bad, the technology is silly (press a button, light some text), or it just doesn’t work, beaten into submission by armies of 4-year-olds.

    Not at the Settlement Exhibit in Reykjavik. There are two unique interfaces I want to cover, but I’ll start at the beginning with a more typical touchscreen that controlled a larger wall display. As you enter the museum, there are multiple stations for reading pages of the Sagas – the stories of Iceland’s history from the 9th to 11th centuries, beautifully illustrated.
    [Image: a miniature from Njáls saga]
    They have been scanned, so you can browse the pages (with translations) without damaging them. I didn’t have all day to spend there, but after starting some of the Sagas, I wished I had.

    Further in you see the reason for the location: the excavation of the oldest known structure in Iceland, a longhouse, is in the museum! Around it are typical displays with text and audio, explaining the structure and what life was like at that time.

    Then I moved into a smaller, dark room with an attractive lit podium (see video below). You could touch it to control the large display on the wall, which showed the longhouse as a 3-D virtual reconstruction. As you moved your finger around the circles on the podium, the camera rotated so you could get a good look at every part of the longhouse. As you moved between circles, a short audio clip introduced the next section. Each circle controlled the longhouse display, but the closer to the center, the more “inside” the structure you could see. Fortunately, someone else made a better video of the interaction than I did:
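    Judging from the video, the podium-to-camera mapping looks roughly polar. Here is a minimal sketch of how such an interface might work – entirely my own guess, not the exhibit’s actual code:

```python
import math

def camera_pose(x: float, y: float) -> tuple:
    """Map a touch point on the circular podium to a virtual camera pose.

    Guessed mapping: the angle around the podium's center sets the orbit
    angle around the longhouse model, and the distance from the center
    sets how far "inside" the reconstruction the camera sits.
    """
    r = min(math.hypot(x, y), 1.0)  # 0.0 at the center, 1.0 at the rim
    orbit = math.atan2(y, x)        # walk around the longhouse
    distance = 1.0 + 9.0 * r        # center = deep inside, rim = overview
    return orbit, distance
```

    The appealing property is that the control is absolute: the same touch always produces the same view, so a visitor can never get “lost” the way they can with a free-flying camera.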

    The last display was simple but took planning and thought. Near the exit was a large table display of the longhouse. It was also a touch interface: you could put your hand on the table to activate information about how parts of the house were used. Think of the challenges: when I was there, it was surrounded by ten people, all touching it at once, all looking for information in different languages. It has to be low enough for everyone to see, but not so low that it’s hard to touch. Overall, they did a great job.

    Be sure to do a stopover if you cross the Atlantic!

    Both videos come from Alex Martire on YouTube.

    Tesla is wrong to use the “Autopilot” term

    Self-driving cars are a hot topic! See this Wikipedia page on Autonomous cars for a short primer. This post is mainly an exploration of how the technology is presented to the user.

    Tesla markets its self-driving technology using the term “Autopilot”. The German government is apparently unhappy with the use of that term because it could be misleading (LA Times):

    Germany’s transport minister told Tesla to cease using the Autopilot name to market its cars in that country, under the theory that the name suggests the cars can drive themselves without driver attention, the news agency Reuters reported Sunday.

    Tesla wants to be perceived as first to market with a fully autonomous car (hence the term Autopilot), yet it stresses that Autopilot is only a driver-assistance system and that the driver is meant to stay vigilant. But I do not think the term is perceived that way by most lay people. It encourages unrealistic expectations and may lead to uncritical usage and acceptance of the technology, or complacency.

    Complacency can manifest as:

    • too much trust in the automation (more than warranted)
    • allocation of attention to other things and not monitoring the proper functioning of automation
    • over-reliance on the automation (letting it carry out too much of the task)
    • reduced awareness of one’s surroundings (situation awareness)

    Complacency is especially dangerous when unexpected situations occur and the driver must resume manual control.  The non-profit Consumer Reports says:

    “By marketing their feature as ‘Autopilot,’ Tesla gives consumers a false sense of security,” says Laura MacCleery, vice president of consumer policy and mobilization for Consumer Reports. “In the long run, advanced active safety technologies in vehicles could make our roads safer. But today, we’re deeply concerned that consumers are being sold a pile of promises about unproven technology. ‘Autopilot’ can’t actually drive the car, yet it allows consumers to have their hands off the steering wheel for minutes at a time. Tesla should disable automatic steering in its cars until it updates the program to verify that the driver’s hands are on the wheel.”

    Companies must commit immediately to name automated features with descriptive—not exaggerated—titles, MacCleery adds, noting that automakers should roll out new features only when they’re certain they are safe.

    Tesla responded:

    “We have great faith in our German customers and are not aware of any who have misunderstood the meaning, but would be happy to conduct a survey to assess this.”

    But Tesla is doing a disservice by marketing its system using the term Autopilot and by selectively releasing video of the system performing flawlessly.

    Using terms such as Autopilot, or releasing videos of near-perfect instances of the technology, will only hasten driver complacency.

    But no matter how they are marketed, these systems are just machines that rely on high-quality sensor input (radar, cameras, etc.). Sensors can fail, GPS data can be stale, and situations can change quickly and dramatically (particularly on the road). The system WILL make a mistake – and on the road, the cost of that single mistake can be deadly.

    Parasuraman and colleagues have extensively researched how humans behave when exposed to highly reliable automation in the context of flight automation and autopilot systems. In a classic study, they first induced a sense of complacency by exposing participants to highly reliable automation. Later, when the automation failed, the more complacent participants were much worse at detecting the failure (Parasuraman, Molloy, & Singh, 1993).

    Interestingly, when researchers examined highly autonomous autopilot systems in aircraft, they found that pilots were often confused by or distrustful of the automation’s decisions (e.g., initiating course corrections without any pilot input), suggesting LOW complacency. But it is important to note that pilots are highly trained and have probably not been subjected to the same degree of effusively positive marketing about self-driving technology that the public receives. Tesla, in essence, tells drivers to “trust us”, further increasing the likelihood of driver complacency:

    We are excited to announce that, as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. Eight surround cameras provide 360 degree visibility around the car at up to 250 meters of range. Twelve updated ultrasonic sensors complement this vision, allowing for detection of both hard and soft objects at nearly twice the distance of the prior system. A forward-facing radar with enhanced processing provides additional data about the world on a redundant wavelength, capable of seeing through heavy rain, fog, dust and even the car ahead.

    To make sense of all of this data, a new onboard computer with more than 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software. Together, this system provides a view of the world that a driver alone cannot access, seeing in every direction simultaneously and on wavelengths that go far beyond the human senses.

    References

    Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced “complacency.” International Journal of Aviation Psychology, 3(1), 1–23.

    Some other key readings on complacency:

    Parasuraman, R. (2000). Designing automation for human use: empirical studies and quantitative models. Ergonomics, 43(7), 931–951. http://doi.org/10.1080/001401300409125

    Parasuraman, R., & Wickens, C. D. (2008). Humans: Still vital after all these years of automation. Human Factors, 50(3), 511–520. http://doi.org/10.1518/001872008X312198

    Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. http://doi.org/10.1177/0018720810376055


    Human Factors Potpourri

    Some recent items in the news with a human factors angle:

    • What happened to Google Maps? An interesting comparison of Google Maps in 2010 and 2016 by designer/cartographer Justin O’Beirne.
    • India will use 3D paintings to slow down drivers.  Excellent use of optical illusions for road safety.
    • Death by GPS. GPS mis-routing is the easiest and most relatable example of human-automation interaction. Unfortunately, to its detriment, this article does not discuss the automation literature, instead focusing on more basic processes that, I think, are less relevant.

    So you want to go to school for Human Factors: Final Steps

    This is Post 4 in our ongoing series about graduate school in Human Factors. (Post 1 & Post 2 & Post 3)

    1. Prepare your materials and apply

    • Take the GRE. Most programs will require GRE scores, and you’ll want to take the test early in case you need to take it again. You can and should study for the GRE – no matter what people tell you, studying affects scores. Why is a good GRE score so important? It is not only about getting admitted: GRE scores are often used in allocating fellowships, research assistantships (RAs), and teaching assistantships (TAs). A bonus fellowship could mean as much as a 30% increase in your funding offer.
    • Select at least 3 people to write letters of reference on your behalf. They should be faculty who know you well and can speak about your ability to succeed in graduate school.
      Do not include letter writers such as family, friends, pastors, or other “character references.” They hold little to no weight and may count against you if the review committee assumes you couldn’t find academic references.
    • When selecting letter writers, ask whether they can write “a positive recommendation” instead of just “a recommendation” – you want an honest answer. A recommendation from a class instructor that just says “This person was in my class. They seemed interested. They received X grade” doesn’t mean much to the review committee. Alert your letter writers at least a month before the first deadline, preferably two.
    • Even for professors who know you well, it never hurts to remind them of the research activities you’ve done and what you learned from them. A page with a bulleted list will jog your letter writer’s memory and help them write a detailed, personal letter.

    2. Wait!

    • You’ll probably hear about acceptance in February, but it may be as late as the end of March. If you were put on a waitlist, you might not know until just before the April 15th deadline; schools may have made offers and are waiting to hear whether those are accepted before making an offer to you. There is no shame in coming off the waitlist – even the waitlists are very competitive for PhD programs.
