Death by GPS. GPS mis-routing is the easiest and most relatable example of human-automation interaction. Unfortunately, to its detriment, this article does not discuss the automation literature, focusing instead on more basic processes that, I think, are less relevant.
The article “The Human Factor” in Vanity Fair is two years old, but since I can’t believe I missed posting it — here it is! It’s a riveting read with details of the Air France Flight 447 accident and intelligent discussion of the impact automation has on human performance. Dr. Nadine Sarter is interviewed and I learned of a list of flight-specific “laws” developed by Dr. Earl Wiener, a past-president of HFES.
You all know I love podcasts. One of my favorites, Big Picture Science, interviewed Nicholas Carr (a journalist) on over-reliance on automation. The full episode, What the Hack, also covers computer security. To skip to the HF portion, click here.
+points for mentioning human factors by name
+points for clearly having read much of the trust in automation literature
-points for falling back on the “we automate because we’re lazy” claim, rather than acknowledging that the complexity of many modern systems requires automation for a human to succeed. Do you want that flight to NY on the day you want it? Then we need automation to make that happen – the task has moved beyond human ability to accomplish alone.
-points for the tired argument that things are different now, that Google is making us dumber. It’s essentially the same argument that has accompanied every introduction of technology, including the printing press. We aren’t any different from the humans who painted caves 17,300 years ago.
The field of artificial intelligence is obviously a computational and engineering problem: designing a machine (i.e., robot) or software that can emulate thinking to a high degree. But eventually, any AI must interact with a human either by taking control of a situation from a human (e.g., flying a plane) or suggesting courses of action to a human.
I thought this recent news item about potentially dangerous AI might be a great segue to another discussion of human-automation interaction. Specifically, to a detail that does not frequently get discussed in splashy news articles or by non-human-factors people: degree of automation. This blog post is heavily informed by a proceedings paper by Wickens, Li, Santamaria, Sebok, and Sarter (2010).
First, to HF researchers, automation is a generic term that encompasses anything that carries out a task that was once done by a human: robotic assembly, medical diagnostic aids, digital camera scene modes, even hypothetical autonomous weapons with AI. These disparate examples simply differ in degree of automation.
Let’s back up for a bit: Automation can be characterized by two independent dimensions:
STAGE or TYPE: What is it doing and how is it doing it?
LEVEL: How much is it doing?
Stage/Type of automation describes WHAT tasks are being automated and, sometimes, how. Is the task perceptual, like enhancing vision at night or amplifying certain sounds? Or is the automation carrying out a task that is more cognitive, like generating the three best ways to get to your destination in the least amount of time?
The second dimension, Level, refers to the balance of tasks shared between the automation and the human; is the automation doing a tiny bit of the task and then leaving the rest to the user? Or is the automation acting completely on its own with no input from the operator (or ability to override)?
If you imagine STAGE/TYPE (BLUE/GREEN) and LEVEL (RED) as the X and Y of a chart (below), it becomes clearer how various everyday examples of automation fit into the scheme. As LEVEL and/or TYPE increase, we get a higher degree of automation (dotted line).
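The two-dimensional scheme can be made concrete with a small sketch. This is only an illustration, not code from Wickens et al. (2010): the stage names follow the four-stage model common in this literature, but the level labels, the example placements, and the idea of summing the two axes into a single "degree" score are my own simplifications for the sake of the demo.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical encoding of the two dimensions. The stage ordering
# (information acquisition -> action implementation) follows the
# four-stage model used in the human-automation literature.
class Stage(IntEnum):
    INFORMATION_ACQUISITION = 1   # e.g., night-vision enhancement
    INFORMATION_ANALYSIS = 2      # e.g., generating candidate routes
    DECISION_SELECTION = 3        # e.g., recommending one route
    ACTION_IMPLEMENTATION = 4     # e.g., autopilot flying the plane

class Level(IntEnum):
    LOW = 1     # automation assists; the human does most of the task
    MEDIUM = 2  # automation acts, but the human can veto or override
    HIGH = 3    # automation acts autonomously, with no operator input

@dataclass
class Automation:
    name: str
    stage: Stage
    level: Level

    @property
    def degree(self) -> int:
        # Degree of automation rises with either dimension (the dotted
        # diagonal in the figure); a simple sum is one crude way to
        # collapse the two axes into a single ordering.
        return int(self.stage) + int(self.level)

examples = [
    Automation("digital camera scene mode", Stage.INFORMATION_ANALYSIS, Level.MEDIUM),
    Automation("GPS route suggestion", Stage.DECISION_SELECTION, Level.MEDIUM),
    Automation("hypothetical autonomous weapon", Stage.ACTION_IMPLEMENTATION, Level.HIGH),
]

for a in sorted(examples, key=lambda a: a.degree):
    print(f"{a.name}: degree {a.degree}")
```

Under this toy scoring, the autonomous weapon sits at the top-right corner of the chart: highest stage, highest level, highest degree.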
Mainstream discussions of AI and its potential dangers seem to focus on a hypothetical ultra-high degree of automation: a weapon that will, on its own, determine threats and act. There are actually very few examples of such a high degree of automation in everyday life, because cutting the human completely “out of the loop” can have severely negative human performance consequences.
The figure below shows some examples of automation and where they fit into the scheme:
Wickens et al. (2010) use the phrase, “the higher they are, the farther they fall.” That is, when humans interact with greater degrees of automation (DOA), they do fine as long as it works correctly, but suffer catastrophic consequences when the automation fails (and it always will at some point). Why? Users get complacent with high-DOA automation, they forget how to do the task themselves, or they lose track of what was going on before the automation failed and thus cannot recover from the failure so easily.
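The trade-off can be sketched as two made-up curves. This is purely illustrative, with invented coefficients, not a model from Wickens et al.: routine performance rises with degree of automation, while performance just after a failure falls with it, because complacency, skill loss, and loss of situation awareness all grow with the degree of automation.

```python
def routine_performance(doa: float) -> float:
    """Toy performance score while the automation works (0 <= doa <= 1)."""
    return 0.6 + 0.4 * doa  # made-up coefficients: more automation helps

def failure_performance(doa: float) -> float:
    """Toy performance score just after the automation fails."""
    return 0.6 - 0.5 * doa  # the higher they are, the farther they fall

for doa in (0.0, 0.5, 1.0):
    print(f"DOA {doa}: routine {routine_performance(doa):.2f}, "
          f"after failure {failure_performance(doa):.2f}")
```

The gap between the two curves widens as DOA increases, which is the point of the phrase: the very systems that help most in routine operation hurt most when they break.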
You may have experienced a mild form of this if your car has a rear-backup camera. Have you ever rented a car without one? How did you feel? That feeling tends to get magnified with higher degrees of automation.
So, highly autonomous weapons (or any high degree of automation) are not only a philosophically bad/evil idea, they are bad for human performance!
Sometimes it’s good to take a step back from the seriousness of our work and find new focus. H(aiku)man factors is the brainchild of my colleague Douglas Gillan. Each summarizes a concept in the field while following the haiku form of 5-7-5, with an emphasis on juxtaposition and the inclusion of nature. Enjoy and contribute your own in the comments!
All of the above are by Doug Gillan.
Inattentional blindness by Allaire Welk
Challenging primary task
Did you notice it?
Affordances by Lawton Pybus
round, smooth ball is thrown
rolls, stops at the flat, wing-back
chair on which I sit
Escalation by Olga Zielinska
headache, blurred vision
do not explore Web MD
it’s not a tumor
Automatic Processing by Anne McLaughlin
end of the workday
finally get to go home
arugh, forgot groceries
Automation by Richard Pak
No wait, I’ll get it myself
Drat, I forgot how
Prospective Memory by Natalee Baldwin
I forgot the milk!
Prospective memory failed
Use a reminder
Working Memory by Will Leidheiser
how much can I remember?
many things at once.
Coincidentally, the topic of social/human-technology interaction is in the news quite a bit today. I’m pleased that the human factors implications of social interaction with technology are getting more focus.
Dr. Rogers has been experimenting with a large robot called the PR2, made by Willow Garage, a robotics company in Palo Alto, Calif., which can fetch and administer medicine, a seemingly simple act that demands a great deal of trust between man and machine.
“We are social beings, and we do develop social types of relationships with lots of things,” she said. “Think about the GPS in your car, you talk to it and it talks to you.” Dr. Rogers noted that people developed connections with their Roomba, the vacuum robot, by giving the machines names and buying costumes for them. “This isn’t a bad thing, it’s just what we do,” she said.
In a more ambitious use of technology, NPR is reporting that researchers are using computer-generated avatars as interviewers to detect soldiers who are susceptible to suicide. Simultaneously, facial movement patterns of the interviewee are recorded:
“For each indicator,” Morency explains, “we will display three things.” First, the report will show the physical behavior of the person Ellie just interviewed, tallying how many times he or she smiled, for instance, and for how long. Then the report will show how much depressed people typically smile, and finally how much healthy people typically smile. Essentially it’s a visualization of the person’s behavior compared to a population of depressed and non-depressed people.
While this sounds like an interesting application, I have to agree with one of its critics that:
“It strikes me as unlikely that face or voice will provide that information with such certainty,” he says.
At worst, it will flood the real therapist with a “big data”-type situation where there may be “signal” but way too much noise (see this article).
Anne and I are big proponents of making sure the world knows what human factors is all about (hence the blog). Both of us were recently interviewed separately about human factors in general as well as our research areas.
The tone is very general and may give lay people a good sense of the breadth of human factors. Plus, you can hear how we sound!
First, Anne was just interviewed for the radio show “Radio In Vivo”.
Late last year, I was interviewed about human factors and my research on the local public radio program Your Day:
Hello readers, and sorry for the unintentional hiatus on the blog. Anne and I have been recovering from the just-completed semester only to be thrown back into another busy semester. As we adjust, feast on this potpourri post of interesting HF-related items from the past week.
In today’s HF potpourri we have three very interesting and loosely related stories:
The BBC looks at the rise of websites that seem to talk to us in a very informal, casual way. Clearly, the effect on the user is not what was intended:
The difference is the use of my name. I also have a problem with people excessively using my name. I feel it gives them some power over me and overuse implies disingenuousness. Like when you ring a call centre where they seem obsessed with saying your name.
With the release of Apple’s in-house developed mapping solution for the new iPhone 5 (and all iOS 6 devices) there has been a major outcry among some users bordering on ridiculous, frothing outrage¹.
Personally, I find the maps for my area are pretty good, and the route guidance worked well even with no network signal.
However, some of the public reaction to the new mapping program is an excellent example of too much reliance on automation that is usually very reliable but fallible (we’ve written about this here and here).
It is very hard to discern what too much reliance looks like until the automation fails. Too much reliance means that you do not double-check the route guidance information, or you ignore other external information (e.g., the bridge is out).
I’ve had my own too-much-reliance experience with mobile Google Maps (documented on the blog). My reaction after the failure was to be less trusting, which led to decreased reliance (and increased “double checking”). Apple’s “PR disaster” is a good wake-up call about users’ unreasonably high trust in very reliable automation that can (and will) fail. Unfortunately, I don’t think it will change users’ perception that all technology, while seemingly reliable, should not be blindly trusted.
Some human factors lessons here (and interesting research questions for the future) are:
How do we tell the user that they need to double check? (aside from a warning)
How should the system convey its confidence? (if it is unsure, how do you tell the user so they adjust their unreasonably high expectations?)
¹ I say “outrage” because those users who most needed phone-based voice navigation probably had to own third party apps for it (I used the Garmin app). The old Google Maps for iPhone never had that functionality. So the scale of the outrage seems partially media-generated.