Category Archives: automation

Human Factors Potpourri

Some recent items in the news with a human factors angle:

  • What happened to Google Maps?  Interesting comparison of Google Maps from 2010/2016 by designer/cartographer Justin O’Beirne.
  • India will use 3D paintings to slow down drivers.  Excellent use of optical illusions for road safety.
  • Death by GPS.  GPS mis-routing is the easiest and most relatable example of human-automation interaction.  Unfortunately, to its detriment, this article does not discuss the automation literature, instead focusing on more basic processes that, I think, are less relevant.

Wiener’s Laws

The article “The Human Factor” in Vanity Fair is two years old, but since I can’t believe I missed posting it, here it is! It’s a riveting read with details of the Air France Flight 447 accident and an intelligent discussion of the impact automation has on human performance. Dr. Nadine Sarter is interviewed, and I learned of a list of flight-specific “laws” developed by Dr. Earl Wiener, a past president of HFES.

“Wiener’s Laws,” from the article and from Aviation Week:

  • Every device creates its own opportunity for human error.
  • Exotic devices create exotic problems.
  • Digital devices tune out small errors while creating opportunities for large errors.
  • Invention is the mother of necessity.
  • Some problems have no solution.
  • It takes an airplane to bring out the worst in a pilot.
  • Whenever you solve a problem, you usually create one. You can only hope that the one you created is less critical than the one you eliminated.
  • You can never be too rich or too thin (Duchess of Windsor) or too careful about what you put into a digital flight-guidance system (Wiener).
  • Complacency? Don’t worry about it.
  • In aviation, there is no problem so great or so complex that it cannot be blamed on the pilot.
  • There is no simple solution out there waiting to be discovered, so don’t waste your time searching for it.
  • If at first you don’t succeed… try a new system or a different approach.
  • In God we trust. Everything else must be brought into your scan.
  • Any pilot who can be replaced by a computer should be.
  • Today’s nifty, voluntary system is tomorrow’s F.A.R.

Kudos to the author, William Langewiesche, for a well researched and well written piece.

Discussion of Human Factors on “Big Picture Science” podcast

You all know I love podcasts. One of my favorites, Big Picture Science, interviewed Nicholas Carr (a journalist) about over-reliance on automation. The full episode, What the Hack, also covers computer security. To skip to the HF portion, click here.

  • +points for mentioning human factors by name
  • +points for clearly having read much of the trust in automation literature
  • -points for falling back on the “we automate because we’re lazy” claim, rather than acknowledging that the complexity of many modern systems requires automation for a human to succeed. Do you want that flight to NY on the day you want it? Then we need automation to make that happen; the task has moved beyond what a human can accomplish alone.
  • -points for the tired argument that things are different now, that Google is making us dumber. It’s essentially the same argument made with every new technology, including the printing press. We aren’t any different from the humans who painted caves 17,300 years ago.*

For more podcasts on humans and automation, check out this recent Planet Money: The Big Red Button. You’ll never look at an elevator the same way.

*While looking up support for the claim that people have always thought their era was worse than the previous, I found this blog post. Looks like I’m not the first to have this exact thought.

Prominent figures warn of dangerous Artificial Intelligence (it’s probably a bad HF idea too)

Recently, some very prominent scientists and other figures have warned of the consequences of autonomous weapons, or more generally artificial intelligence run amok.

The field of artificial intelligence is obviously a computational and engineering problem: designing a machine (e.g., a robot) or software that can emulate thinking to a high degree.  But eventually, any AI must interact with a human, either by taking control of a situation from a human (e.g., flying a plane) or by suggesting courses of action to a human.

I thought this recent news item about potentially dangerous AI might be a great segue to another discussion of human-automation interaction.  Specifically, to a detail that does not frequently get discussed in splashy news articles or by non-human-factors people:  degree of automation. This blog post is heavily informed by a proceedings paper by Wickens, Li, Santamaria, Sebok, and Sarter (2010).

First, to HF researchers, automation is a generic term that encompasses anything that carries out a task that was once done by a human: robotic assembly, medical diagnostic aids, digital camera scene modes, and even hypothetical autonomous weapons with AI.  These disparate examples simply differ in their degree of automation.

Let’s back up for a bit: Automation can be characterized by two independent dimensions:

  • STAGE or TYPE:  What is it doing and how is it doing it?
  • LEVEL:  How much is it doing?

Stage/Type of automation describes WHAT tasks are being automated and, sometimes, how.  Is the task perceptual, like enhancing vision at night or amplifying certain sounds?  Or is the automation carrying out a task that is more cognitive, like generating the three best ways to get to your destination in the least amount of time?

The second dimension, Level, refers to the balance of tasks shared between the automation and the human; is the automation doing a tiny bit of the task and then leaving the rest to the user?  Or is the automation acting completely on its own with no input from the operator (or ability to override)?

If you imagine STAGE/TYPE (BLUE/GREEN) and LEVEL (RED) as the X and Y of a chart (below), it becomes clearer how various everyday examples of automation fit into the scheme.  As LEVEL and/or TYPE increase, we get a higher degree of automation (dotted line).

Degrees of automation, represented as the dotted line (adapted from Wickens et al., 2010)
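To make the scheme a bit more concrete, here is a minimal sketch in Python. The stage and level orderings, the additive “degree” score, and the example classifications are my own simplifications for illustration; they are not the exact scales from Wickens et al. (2010).

```python
# Illustrative sketch only: the stage/level orderings and the additive
# "degree" score are simplified stand-ins, not the exact scales from
# Wickens et al. (2010).
from dataclasses import dataclass

# Stage/Type: WHAT kind of task is automated, roughly ordered from
# perception support to action execution.
STAGES = [
    "information acquisition",   # e.g., night-vision enhancement
    "information analysis",      # e.g., trend displays, diagnostic aids
    "decision selection",        # e.g., suggesting the three best routes
    "action implementation",     # e.g., actually carrying out the action
]

# Level: HOW MUCH of the task the automation takes over.
LEVELS = [
    "manual",            # human does everything
    "suggests options",  # human chooses and acts
    "acts with approval",
    "acts and informs",
    "fully autonomous",  # no operator input or ability to override
]

@dataclass
class Automation:
    name: str
    stage: str
    level: str

    def degree(self) -> int:
        """Crude degree-of-automation score: moving up either dimension
        pushes an example farther along the dotted line in the figure."""
        return STAGES.index(self.stage) + LEVELS.index(self.level)

examples = [
    Automation("digital camera scene mode", "information analysis", "acts and informs"),
    Automation("GPS route guidance", "decision selection", "suggests options"),
    Automation("hypothetical autonomous weapon", "action implementation", "fully autonomous"),
]

for ex in sorted(examples, key=Automation.degree):
    print(f"{ex.name}: degree ~ {ex.degree()}")
```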

Mainstream discussions of AI and its potential dangers seem to be focusing on a hypothetical ultra-high degree of automation.  A hypothetical weapon that will, on its own, determine threats and act.  There are actually very few examples of such a high level of automation in everyday life because cutting the human completely “out of the loop” can have severely negative human performance consequences.

The figure below shows some examples of automation and where they fit into the scheme:

Approximate degrees of automation for everyday examples of automation

Wickens et al. (2010) use the phrase “the higher they are, the farther they fall.”  This means that when humans interact with greater degrees of automation, they do fine if it works correctly, but encounter catastrophic consequences when the automation fails (and it always will at some point).  Why?  Users get complacent with high-DOA automation, they forget how to do the task themselves, or they lose track of what was going on before the automation failed and thus cannot recover from the failure as easily.

You may have experienced a mild form of this if your car has a rear-backup camera.  Have you ever rented a car without one?  How did you feel?  That feeling tends to get magnified with higher degrees of automation.

So, highly autonomous weapons (or any high degree of automation) are not only a philosophically bad/evil idea, they are bad for human performance!

 

Haikuman Factors

Sometimes it’s good to take a step back from the seriousness of our work and find new focus. H(aiku)man factors is the brainchild of my colleague Douglas Gillan. Each haiku summarizes a concept in the field while following the haiku form of 5-7-5, with an emphasis on juxtaposition and the inclusion of nature. Enjoy and contribute your own in the comments!

[H(aik)uman Factors haiku images 1–6]

All of the above are by Doug Gillan.

Other contributions:

Inattentional blindness by Allaire Welk
Unicycling clown
Challenging primary task
Did you notice it?

Affordances by Lawton Pybus
round, smooth ball is thrown
rolls, stops at the flat, wing-back
chair on which I sit

Escalation by Olga Zielinska
headache, blurred vision
do not explore Web MD
it’s not a tumor

Automatic Processing by Anne McLaughlin
end of the workday
finally get to go home
arugh, forgot groceries

Automation by Richard Pak
Siri, directions!
No wait, I’ll get it myself
Drat, I forgot how

Prospective Memory by Natalee Baldwin
I forgot the milk!
Prospective memory failed
Use a reminder

Working Memory by Will Leidheiser
copious knowledge.
how much can I remember?
many things at once.

Radio interview with Rich

Our own Rich Pak was interviewed by the Clemson radio show “Your Day.”


They cover everything from the birth of human factors psychology to the design of prospective memory aids for older adults. Enjoy!

Human-Technology Interactions in Health


Coincidentally, social/human-technology interaction is in the news quite a bit today.  I’m pleased that the human factors implications of social interaction with technology are getting more attention.

First, Dr. Wendy Rogers of Georgia Tech gets interviewed in the New York Times about her work on older adults and in-home helper robots:

Dr. Rogers has been experimenting with a large robot called the PR2, made by Willow Garage, a robotics company in Palo Alto, Calif., which can fetch and administer medicine, a seemingly simple act that demands a great deal of trust between man and machine.

“We are social beings, and we do develop social types of relationships with lots of things,” she said. “Think about the GPS in your car, you talk to it and it talks to you.” Dr. Rogers noted that people developed connections with their Roomba, the vacuum robot, by giving the machines names and buying costumes for them. “This isn’t a bad thing, it’s just what we do,” she said.

In a more ambitious use of technology, NPR is reporting that researchers are using computer-generated avatars as interviewers to detect soldiers who are susceptible to suicide. Simultaneously, facial movement patterns of the interviewee are recorded:

“For each indicator,” Morency explains, “we will display three things.” First, the report will show the physical behavior of the person Ellie just interviewed, tallying how many times he or she smiled, for instance, and for how long. Then the report will show how much depressed people typically smile, and finally how much healthy people typically smile. Essentially it’s a visualization of the person’s behavior compared to a population of depressed and non-depressed people.

While this sounds like an interesting application, I have to agree with one of its critics that:

“It strikes me as unlikely that face or voice will provide that information with such certainty,” he says.

At worst, it will flood the real therapist with a “big data”-type situation where there may be “signal” but way too much noise (see this article).

Anne & Rich Interviewed about Human Factors

Anne and I are big proponents of making sure the world knows what human factors is all about (hence the blog).  Both of us were recently interviewed separately about human factors in general as well as our research areas.

The tone is very general and may give lay people a good sense of the breadth of human factors.  Plus, you can hear how we sound!

First, Anne was just interviewed for the radio show “Radio In Vivo”.


Late last year, I was interviewed about human factors and my research on the local public radio program Your Day:


Begging robots, overly familiar websites, and the power of the unconscious?

Hello readers, and sorry for the unintentional hiatus on the blog. Anne and I have been recovering from the just-completed semester only to be thrown back into another busy semester.  As we adjust, feast on this potpourri post of interesting HF-related items from the past week.

In today’s HF potpourri we have three very interesting and loosely related stories:

  • There seems to be a bit of a resurgence in the study of anthropomorphism in HF/computer science, primarily because…ROBOTS.  It’s a topic I’ve written about [PDF] in the context of human-automation interaction.  The topic has reached mainstream awareness because NPR just released a story on it.
  • The BBC looks at the rise of websites that seem to talk to us in a very informal, casual way.  Clearly, the effect on the user is not what was intended:

The difference is the use of my name. I also have a problem with people excessively using my name. I feel it gives them some power over me and overuse implies disingenuousness. Like when you ring a call centre where they seem obsessed with saying your name.

What Apple Maps “PR Disaster” Says about Human-Automation Interaction

With the release of Apple’s in-house-developed mapping solution for the new iPhone 5 (and all iOS 6 devices), there has been a major outcry among some users, bordering on ridiculous, frothing outrage1.

Personally, the maps for my area are pretty good and the route guidance worked well even with no network signal.

However, some of the public reaction to the new mapping program is an excellent example of too much reliance on automation that is usually very reliable but fallible (we’ve written about this here and here).

It is very hard to discern what too much reliance looks like until the automation fails.  Too much reliance means that you do not double-check the route guidance information, or you ignore other external information (e.g., the bridge is out).

I’ve had my own too-much-reliance experience with mobile Google Maps (documented on the blog).  My reaction after the failure was to be less trusting, which led to decreased reliance (and increased “double checking”).  Apple’s “PR disaster” is a good wake-up call about users’ unreasonably high trust in very reliable automation that can (and will) fail.  Unfortunately, I don’t think it will do much to change users’ perceptions: the lesson that even seemingly reliable technology should not be blindly trusted probably won’t stick.

Some human factors lessons here (and interesting research questions for the future) are:

  • How do we tell the user that they need to double-check? (aside from a warning)
  • How should the system convey its confidence? (If it is unsure, how do you tell the user so they adjust their unreasonably high expectations? One hypothetical sketch follows this list.)
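As a purely hypothetical illustration of that second question, a navigation system could surface its own uncertainty instead of presenting every route with equal authority. The confidence values, threshold, and wording below are invented for the sketch; they are not how Apple Maps or Google Maps actually behave.

```python
# Hypothetical sketch: route guidance that flags low-confidence directions
# so users know when to double-check. The 0.8 threshold and the confidence
# numbers are invented for illustration.

def guidance_message(route: str, confidence: float, threshold: float = 0.8) -> str:
    """Return a guidance prompt, calibrating user trust when confidence is low."""
    if confidence >= threshold:
        return f"Follow {route}."
    # Below threshold: expose the uncertainty rather than hiding it,
    # nudging the user to verify against the real world (signage, the bridge).
    return (f"Suggested: {route} -- map data may be unreliable here; "
            "please verify before committing.")

print(guidance_message("I-85 N toward Charlotte", 0.95))
print(guidance_message("County Road 42 across the bridge", 0.55))
```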

[NPR]

1I say “outrage” because those users who most needed phone-based voice navigation probably already owned third-party apps for it (I used the Garmin app).  The old Google Maps for iPhone never had that functionality.  So the scale of the outrage seems partially media-generated.