Category Archives: automation

Prominent figures warn of dangerous Artificial Intelligence (it’s probably a bad HF idea too)

Recently, some very prominent scientists and other figures have warned of the consequences of autonomous weapons, or more generally artificial intelligence run amok.

The field of artificial intelligence is obviously a computational and engineering problem: designing a machine (e.g., a robot) or software that can emulate thinking to a high degree. But eventually, any AI must interact with a human, either by taking control of a situation (e.g., flying a plane) or by suggesting courses of action to the human.

I thought this recent news item about potentially dangerous AI might be a great segue into another discussion of human-automation interaction; specifically, a detail that does not frequently get discussed in splashy news articles or by non-human-factors people: degree of automation. This blog post is heavily informed by a proceedings paper by Wickens, Li, Santamaria, Sebok, and Sarter (2010).

First, to HF researchers, automation is a generic term that encompasses anything that carries out a task once done by a human: robotic assembly, medical diagnostic aids, digital camera scene modes, and even hypothetical autonomous weapons with AI. These disparate examples simply differ in their degree of automation.

Let’s back up for a bit: Automation can be characterized by two independent dimensions:

  • STAGE or TYPE: What is it doing and how is it doing it?
  • LEVEL: How much of the task is it doing?

Stage/Type of automation describes WHAT tasks are being automated and sometimes HOW. Is the task perceptual, like enhancing vision at night or amplifying certain sounds? Or is the automation carrying out a task that is more cognitive, like generating the three best ways to get to your destination in the least amount of time?

The second dimension, Level, refers to the balance of tasks shared between the automation and the human; is the automation doing a tiny bit of the task and then leaving the rest to the user?  Or is the automation acting completely on its own with no input from the operator (or ability to override)?

If you imagine STAGE/TYPE (BLUE/GREEN) and LEVEL (RED) as the X and Y of a chart (below), it becomes clearer how various everyday examples of automation fit into the scheme.  As LEVEL and/or TYPE increase, we get a higher degree of automation (dotted line).

Degrees of automation represented as the dotted line (adapted from Wickens et al., 2010)
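To make the two-dimensional scheme concrete, here is a minimal sketch in Python. The stage names follow the stages-of-automation framework that Wickens et al. (2010) build on; the numeric index is purely illustrative and is not how they actually quantify degree of automation.

```python
from enum import IntEnum

class Stage(IntEnum):
    """WHAT is being automated: ordered stages of information processing."""
    INFORMATION_ACQUISITION = 1   # e.g., enhancing vision at night
    INFORMATION_ANALYSIS = 2      # e.g., generating the three best routes
    DECISION_SELECTION = 3        # e.g., recommending a single route
    ACTION_IMPLEMENTATION = 4     # e.g., the system drives the route itself

def degree_of_automation(stage: Stage, level: int, max_level: int = 10) -> float:
    """Illustrative 0-1 index: degree rises as stage and/or level rise
    (the dotted line in the figure). `level` is how much of the task the
    automation performs, from 1 (a tiny bit) to max_level (acting alone)."""
    return (int(stage) / len(Stage)) * (level / max_level)

# A rear-backup camera: early (perceptual) stage, modest level.
print(degree_of_automation(Stage.INFORMATION_ACQUISITION, 3))   # 0.075
# A hypothetical fully autonomous weapon: final stage, maximum level.
print(degree_of_automation(Stage.ACTION_IMPLEMENTATION, 10))    # 1.0
```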

Mainstream discussions of AI and its potential dangers seem to be focusing on a hypothetical ultra-high degree of automation.  A hypothetical weapon that will, on its own, determine threats and act.  There are actually very few examples of such a high level of automation in everyday life because cutting the human completely “out of the loop” can have severely negative human performance consequences.

The figure below shows some examples of automation and where they fit into the scheme:

Approximate degrees of automation of everyday examples of automation

Wickens et al. (2010) use the phrase, “the higher they are, the farther they fall.” This means that when humans interact with greater degrees of automation, they do fine as long as it works correctly, but encounter catastrophic consequences when the automation fails (and it always will, at some point). Why? Users get complacent with high-DOA automation, they forget how to do the task themselves, or they lose track of what was going on before the automation failed and thus cannot easily recover from the failure.

You may have experienced a mild form of this if your car has a rear-backup camera. Have you ever rented a car without one? How do you feel? That feeling tends to get magnified with higher degrees of automation.

So, highly autonomous weapons (or any automation with a high degree of automation) are not only a philosophically bad/evil idea, they are bad for human performance!


H(aik)uman Factors

Sometimes it’s good to take a step back from the seriousness of our work and find new focus. H(aik)uman Factors is the brainchild of my colleague Douglas Gillan. Each haiku summarizes a concept in the field while following the 5-7-5 haiku form, with an emphasis on juxtaposition and the inclusion of nature. Enjoy, and contribute your own in the comments!

H(aik)uman Factors 1-6 (haiku images)

All of the above are by Doug Gillan.

Other contributions:

Inattentional blindness by Allaire Welk
Unicycling clown
Challenging primary task
Did you notice it?

Affordances by Lawton Pybus
round, smooth ball is thrown
rolls, stops at the flat, wing-back
chair on which I sit

Escalation by Olga Zielinska
headache, blurred vision
do not explore Web MD
it’s not a tumor

Automatic Processing by Anne McLaughlin
end of the workday
finally get to go home
arugh, forgot groceries

Automation by Richard Pak
Siri, directions!
No wait, I’ll get it myself
Drat, I forgot how

Prospective Memory by Natalee Baldwin
I forgot the milk!
Prospective memory failed
Use a reminder

Working Memory by Will Leidheiser
copious knowledge.
how much can I remember?
many things at once.

Radio interview with Rich

Our own Rich Pak was interviewed by the Clemson radio show “Your Day.”


They cover everything from the birth of human factors psychology to the design of prospective memory aids for older adults. Enjoy!

Human-Technology Interactions in Health


Coincidentally, the topic of social human-technology interaction is in the news quite a bit today. I’m pleased that the human factors implications of social interaction with technology are getting more attention.

First, Dr. Wendy Rogers of Georgia Tech gets interviewed in the New York Times about her work on older adults and in-home helper robots:

Dr. Rogers has been experimenting with a large robot called the PR2, made by Willow Garage, a robotics company in Palo Alto, Calif., which can fetch and administer medicine, a seemingly simple act that demands a great deal of trust between man and machine.

“We are social beings, and we do develop social types of relationships with lots of things,” she said. “Think about the GPS in your car, you talk to it and it talks to you.” Dr. Rogers noted that people developed connections with their Roomba, the vacuum robot, by giving the machines names and buying costumes for them. “This isn’t a bad thing, it’s just what we do,” she said.

In a more ambitious use of technology, NPR is reporting that researchers are using computer-generated avatars as interviewers to identify soldiers who are at risk of suicide. Simultaneously, the facial movement patterns of the interviewee are recorded:

“For each indicator,” Morency explains, “we will display three things.” First, the report will show the physical behavior of the person Ellie just interviewed, tallying how many times he or she smiled, for instance, and for how long. Then the report will show how much depressed people typically smile, and finally how much healthy people typically smile. Essentially it’s a visualization of the person’s behavior compared to a population of depressed and non-depressed people.

While this sounds like an interesting application, I have to agree with one of its critics that:

“It strikes me as unlikely that face or voice will provide that information with such certainty,” he says.

At worst, it will flood the real therapist with a “big data”-type situation where there may be “signal” but way too much noise (see this article).
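For the curious, the three-part display Morency describes boils down to comparing one interviewee’s behavior against two reference distributions. Here is a minimal sketch of that idea; the function and the numbers are hypothetical, not the actual research software.

```python
import statistics

def behavior_report(smiles_per_min: float,
                    depressed_rates: list[float],
                    healthy_rates: list[float]) -> dict:
    """The three things displayed: the interviewee's behavior, how much
    depressed people typically smile, and how much healthy people do."""
    healthy_mean = statistics.mean(healthy_rates)
    return {
        "interviewee": smiles_per_min,
        "depressed_typical": statistics.mean(depressed_rates),
        "healthy_typical": healthy_mean,
        # Distance from the healthy group in standard deviations; a single
        # noisy indicator like this is exactly what the critic doubts.
        "z_vs_healthy": (smiles_per_min - healthy_mean)
                        / statistics.stdev(healthy_rates),
    }

print(behavior_report(1.0, [0.8, 1.1, 0.9], [2.5, 3.0, 3.5]))
```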

Anne & Rich Interviewed about Human Factors

Anne and I are big proponents of making sure the world knows what human factors is all about (hence the blog).  Both of us were recently interviewed separately about human factors in general as well as our research areas.

The tone is very general and may give lay people a good sense of the breadth of human factors.  Plus, you can hear how we sound!

First, Anne was just interviewed for the radio show “Radio In Vivo”.


Late last year, I was interviewed about human factors and my research on the local public radio program Your Day:


Begging robots, overly familiar websites, and the power of the unconscious?

Hello readers, and sorry for the unintentional hiatus on the blog. Anne and I have been recovering from the just-completed semester only to be thrown back into another busy semester.  As we adjust, feast on this potpourri post of interesting HF-related items from the past week.

In today’s HF potpourri we have three very interesting and loosely related stories:

  • There seems to be a bit of a resurgence in the study of anthropomorphism in HF/computer science, primarily because…ROBOTS. It’s a topic I’ve written about [PDF] in the context of human-automation interaction. The topic has reached mainstream awareness because NPR just released a story on it.
  • The BBC looks at the rise of websites that seem to talk to us in a very informal, casual way.  Clearly, the effect on the user is not what was intended:

The difference is the use of my name. I also have a problem with people excessively using my name. I feel it gives them some power over me and overuse implies disingenuousness. Like when you ring a call centre where they seem obsessed with saying your name.

What Apple Maps “PR Disaster” Says about Human-Automation Interaction

With the release of Apple’s in-house developed mapping solution for the new iPhone 5 (and all iOS 6 devices), there has been a major outcry among some users, bordering on ridiculous, frothing outrage[1].

Personally, the maps for my area are pretty good and the route guidance worked well even with no network signal.

However, some of the public reaction to the new mapping program is an excellent example of too much reliance on automation that is usually very reliable but fallible (we’ve written about this here and here).

It is very hard to discern what too much reliance looks like until the automation fails.  Too much reliance means that you do not double-check the route guidance information, or you ignore other external information (e.g., the bridge is out).

I’ve had my own too-much-reliance experience with mobile Google Maps (documented on the blog). My reaction after failure was to be less trusting, which led to decreased reliance (and increased “double checking”). Apple’s “PR disaster” is a good wake-up call about users’ unreasonably high trust in very reliable automation that can (and will) fail. Unfortunately, I don’t think it will change users’ perception; all technology, while seemingly reliable, should not be blindly trusted.

Some human factors lessons here (and interesting research questions for the future) are:

  • How do we tell the user that they need to double-check (aside from a warning)?
  • How should the system convey its confidence? (If it is unsure, how do you tell the user so they adjust their unreasonably high expectations? See the sketch below.)
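On that second question, one can imagine route guidance that carries an explicit confidence score and changes how it addresses the user when that score is low. A hypothetical sketch; no mapping provider exposes exactly this, and the threshold is arbitrary:

```python
from dataclasses import dataclass

@dataclass
class Route:
    summary: str
    confidence: float  # hypothetical 0-1 score from the map provider

def announce(route: Route, threshold: float = 0.7) -> str:
    """Frame low-confidence routes as suggestions to verify, rather than
    announcing them with the same authority as high-confidence ones."""
    if route.confidence < threshold:
        return f"Unverified route: {route.summary}. Please double-check."
    return f"Route: {route.summary}."

print(announce(Route("I-85 N to Exit 40", 0.95)))
print(announce(Route("Unnamed road to ferry crossing", 0.40)))
```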

[NPR]

[1] I say “outrage” because those users who most needed phone-based voice navigation probably had to own third-party apps for it (I used the Garmin app). The old Google Maps for iPhone never had that functionality. So the scale of the outrage seems partially media-generated.

App Usability Evaluations for the Mental Health Field

We’ve posted before on usability evaluations of iPads and apps for academics (e.g., here and here), but today I’d like to point to a blog dedicated to evaluating apps for mental health professionals.

In the newest post, Dr. Jeff Lawley discusses the usability of a DSM Reference app from Kitty CAT Psych. For those who didn’t take intro psych in college, the DSM is the Diagnostic and Statistical Manual, which classifies symptoms into disorders. It’s interesting to read an expert take on this app – he considers attributes I would not have thought of, such as whether the app retains information (privacy issues).

As Dr. Lawley notes on his “about” page, there are few apps designed for mental health professionals and even fewer evaluations of these apps. Hopefully his blog can fill that niche and inspire designers to create more mobile tools for these professionals.

Everyday Automation: Auto-correct

This humorous NYT article discusses the foibles of auto-correct on computers and phones. Auto-correct, a more advanced descendant of the old spell checker, is a form of automation. We’ve discussed automation many times on this blog.

But auto-correct is unique in that it’s probably one of the most frequent touchpoints between humans and automation.

The article nicely covers, in lay language, many of the concepts of automation:

Out of the loop syndrome:

Who’s the boss of our fingers? Cyberspace is awash with outrage. Even if hardly anyone knows exactly how it works or where it is, Autocorrect is felt to be haunting our cellphones or watching from the cloud.

Trust:

We are collectively peeved. People blast Autocorrect for mangling their intentions. And they blast Autocorrect for failing to un-mangle them.

I try to type “geocentric” and discover that I have typed “egocentric”; is Autocorrect making a sort of cosmic joke? I want to address my tweeps (a made-up word, admittedly, but that’s what people do). No: I get “twerps.” Some pairings seem far apart in the lexicographical space. “Cuticles” becomes “citified.” “Catalogues” turns to “fatalities” and “Iditarod” to “radiator.” What is the logic?
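One plausible piece of the logic is simple string distance: the corrector proposes the dictionary word reachable with the fewest single-character edits, which is why “geocentric” and “egocentric” (only two substitutions apart) can trade places. Below is a toy sketch of that flavor of matching; real autocorrect engines, Apple’s included, also weight keyboard proximity, word frequency, and context in ways that are not public.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: fewest insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete from a
                            curr[j - 1] + 1,             # insert into a
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

def autocorrect(word: str, dictionary: list[str]) -> str:
    """Propose the closest dictionary word, ignoring frequency and context."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

print(edit_distance("geocentric", "egocentric"))               # 2
print(autocorrect("geocentrc", ["geocentric", "egocentric"]))  # geocentric
```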

Reliance:

One more thing to worry about: the better Autocorrect gets, the more we will come to rely on it. It’s happening already. People who yesterday unlearned arithmetic will soon forget how to spell. One by one we are outsourcing our mental functions to the global prosthetic brain.

Humorously, even anthropomorphism of automation (attributing human-like characteristics to it, even unintentionally), my research area, makes an appearance:

Peter Sagal, the host of NPR’s “Wait Wait … Don’t Tell Me!” complains via Twitter: “Autocorrect changed ‘Fritos’ to ‘frites.’ Autocorrect is effete. Pass it on.”

(photo credit el frijole @flickr)

Pilots forget to lower landing gear after cell phone distraction

This is back from May, but it’s worth noting. A news story chock-full of the little events that can add up to disaster!

From the article:

Confused Jetstar pilots forgot to lower the wheels and had to abort a landing in Singapore just 150 metres above the ground, after the captain became distracted by his mobile phone, an investigation has found.

Major points:

  • Pilot forgets to turn off cell phone and receives distracting messages prior to landing.
  • Co-pilot is fatigued.
  • They do not communicate with each other before taking action.
  • Another distracting error occurred involving the flap settings on the wings.
  • They do not use the landing checklist.

I was most surprised by that last point – I didn’t know that was optional! Any pilots out there want to weigh in on how frequently checklists are skipped entirely?


Photo credit slasher-fun @ Flickr