Category Archives: automation

Anne & Rich Interviewed about Human Factors

Anne and I are big proponents of making sure the world knows what human factors is all about (hence the blog).  Both of us were recently interviewed separately about human factors in general as well as our research areas.

The tone is very general and may give lay people a good sense of the breadth of human factors.  Plus, you can hear how we sound!

First, Anne was just interviewed for the radio show “Radio In Vivo”.

[Audio clip embedded in the original post.]

Late last year, I was interviewed about human factors and my research on the local public radio program Your Day:

[Audio clip embedded in the original post.]

Begging robots, overly familiar websites, and the power of the unconscious?

Hello readers, and sorry for the unintentional hiatus on the blog. Anne and I have been recovering from the just-completed semester only to be thrown back into another busy semester.  As we adjust, feast on this potpourri post of interesting HF-related items from the past week.

In today’s HF potpourri, we have three interesting and loosely related stories:

  • There seems to be a bit of a resurgence in the study of anthropomorphism in HF/computer science, primarily because…ROBOTS.  It’s a topic I’ve written about [PDF] in the context of human-automation interaction, and it has reached mainstream awareness now that NPR has released a story on it.
  • The BBC looks at the rise of websites that seem to talk to us in a very informal, casual way.  Clearly, the effect on the user is not what was intended:

The difference is the use of my name. I also have a problem with people excessively using my name. I feel it gives them some power over me and overuse implies disingenuousness. Like when you ring a call centre where they seem obsessed with saying your name.

What Apple Maps “PR Disaster” Says about Human-Automation Interaction

With the release of Apple’s in-house-developed mapping solution for the new iPhone 5 (and all iOS 6 devices), there has been a major outcry among some users, bordering on ridiculous, frothing outrage¹.

Personally, the maps for my area are pretty good and the route guidance worked well even with no network signal.

However, some of the public reaction to the new mapping program is an excellent example of too much reliance on automation that is usually very reliable but fallible (we’ve written about this here and here).

It is very hard to discern what too much reliance looks like until the automation fails.  Too much reliance means that you do not double-check the route guidance information, or you ignore other external information (e.g., the bridge is out).

I’ve had my own too-much-reliance experience with mobile Google Maps (documented on the blog).  My reaction after the failure was to be less trusting, which led to decreased reliance (and increased “double checking”).  Apple’s “PR disaster” is a good wake-up call about users’ unreasonably high trust in very reliable automation that can (and will) fail.  Unfortunately, I don’t think it will change users’ perception that all technology, however seemingly reliable, should not be blindly trusted.

Some human factors lessons here (and interesting research questions for the future) are:

  • How do we tell the user that they need to double-check? (aside from a warning)
  • How should the system convey its confidence? (If it is unsure, how do you tell the user so they can adjust their unreasonably high expectations? See the sketch below.)
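
For concreteness, here is one (entirely hypothetical) way the second question might be handled: a navigation app computes a rough confidence score from signals it already has and translates it into an explicit prompt to double-check. The signals, weights, thresholds, and messages below are my own inventions for illustration; nothing here reflects how Apple Maps or any real product works.

```python
# Hypothetical sketch: surfacing route-guidance confidence to the user.
# The signals, weights, thresholds, and messages are invented for illustration;
# no real mapping product is claimed to work this way.

from dataclasses import dataclass


@dataclass
class RouteEstimate:
    """A computed route plus signals about how trustworthy it might be."""
    destination: str
    map_data_age_days: int   # how stale the underlying map data is
    gps_accuracy_m: float    # reported GPS error radius in meters


def guidance_confidence(route: RouteEstimate) -> float:
    """Combine the signals into a rough 0-1 confidence score (toy heuristic)."""
    freshness = max(0.0, 1.0 - route.map_data_age_days / 365.0)
    gps = max(0.0, 1.0 - route.gps_accuracy_m / 100.0)
    return 0.6 * freshness + 0.4 * gps


def confidence_message(route: RouteEstimate) -> str:
    """Turn the score into an explicit, user-facing prompt to double-check."""
    score = guidance_confidence(route)
    if score > 0.8:
        return "Route ready."
    if score > 0.5:
        return "Route ready, but map data may be out of date - watch for closures."
    return "Low confidence in this route - please verify it before relying on it."


if __name__ == "__main__":
    stale_route = RouteEstimate("Clemson, SC", map_data_age_days=400,
                                gps_accuracy_m=30.0)
    print(confidence_message(stale_route))  # falls into the low-confidence case
```

The point is not the arithmetic but the interaction design: the system states its own uncertainty instead of presenting every route with the same apparent authority.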

[NPR]

¹ I say “outrage” because those users who most needed phone-based voice navigation probably had to own third-party apps for it (I used the Garmin app).  The old Google Maps for iPhone never had that functionality, so the scale of the outrage seems partially media-generated.

App Usability Evaluations for the Mental Health Field

We’ve posted before on usability evaluations of iPads and apps for academics (e.g., here and here), but today I’d like to point to a blog dedicated to evaluating apps for mental health professionals.

In the newest post, Dr. Jeff Lawley discusses the usability of a DSM Reference app from Kitty CAT Psych. For those who didn’t take intro psych in college, the DSM is the Diagnostic and Statistical Manual, which classifies symptoms into disorders. It’s interesting to read an expert take on this app – he considers attributes I would not have thought of, such as whether the app retains information (privacy issues).

As Dr. Lawley notes on his “about” page, there are few apps designed for mental health professionals and even fewer evaluations of these apps. Hopefully his blog can fill that niche and inspire designers to create more mobile tools for these professionals.

Everyday Automation: Auto-correct

This humorous NYT article discusses the foibles of auto-correct on computers and phones. Auto-correct, a more capable descendant of the old spell checker, is a type of automation. We’ve discussed automation many times on this blog.

But auto-correct is unique in that it’s probably one of the most frequent touchpoints between humans and automation.

The article nicely covers, in lay language, many of the concepts of automation:

Out-of-the-loop syndrome:

Who’s the boss of our fingers? Cyberspace is awash with outrage. Even if hardly anyone knows exactly how it works or where it is, Autocorrect is felt to be haunting our cellphones or watching from the cloud.

Trust:

We are collectively peeved. People blast Autocorrect for mangling their intentions. And they blast Autocorrect for failing to un-mangle them.

I try to type “geocentric” and discover that I have typed “egocentric”; is Autocorrect making a sort of cosmic joke? I want to address my tweeps (a made-up word, admittedly, but that’s what people do). No: I get “twerps.” Some pairings seem far apart in the lexicographical space. “Cuticles” becomes “citified.” “Catalogues” turns to “fatalities” and “Iditarod” to “radiator.” What is the logic?

Reliance:

One more thing to worry about: the better Autocorrect gets, the more we will come to rely on it. It’s happening already. People who yesterday unlearned arithmetic will soon forget how to spell. One by one we are outsourcing our mental functions to the global prosthetic brain.

Humorously, there is even anthropomorphism of automation (attributing human-like characteristics to it, even unintentionally), which happens to be my research area:

Peter Sagal, the host of NPR’s “Wait Wait … Don’t Tell Me!” complains via Twitter: “Autocorrect changed ‘Fritos’ to ‘frites.’ Autocorrect is effete. Pass it on.”
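
As an aside, the “What is the logic?” question from the excerpt above has a mundane answer: correctors rank candidate words by string similarity, plus signals like word frequency and keyboard layout that a toy example ignores, which is how visually distant pairings can still win. Here is a deliberately naive sketch using Python’s standard library; the word list and cutoff are arbitrary, and nothing here reflects how any real phone keyboard is implemented.

```python
# Toy auto-correct: rank a small word list by string similarity to the input.
# Real correctors also weight word frequency, keyboard adjacency, and the
# user's own history, so the "closest-looking" word does not always win.

import difflib

WORDS = ["geocentric", "egocentric", "cuticles", "citified",
         "catalogues", "fatalities", "twerps", "radiator"]


def suggest(typed: str, n: int = 3) -> list[str]:
    """Return up to n words from WORDS most similar to what was typed."""
    return difflib.get_close_matches(typed, WORDS, n=n, cutoff=0.4)


if __name__ == "__main__":
    for typo in ["geocentirc", "tweeps", "iditarod"]:
        print(typo, "->", suggest(typo))
```

Even this naive version shows why corrections can feel arbitrary: the ranking is purely statistical, with no model of what the writer actually meant.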

(photo credit el frijole @flickr)

Pilots forget to lower landing gear after cell phone distraction

This is back from May, but it’s worth noting. A news story chock-full of the little events that can add up to disaster!

From the article:

Confused Jetstar pilots forgot to lower the wheels and had to abort a landing in Singapore just 150 metres above the ground, after the captain became distracted by his mobile phone, an investigation has found.

Major points:

  • Pilot forgets to turn off cell phone and receives distracting messages prior to landing.
  • Co-pilot is fatigued.
  • They do not communicate with each other before taking action.
  • Another distracting error occurs, involving the flap settings on the wings.
  • They do not use the landing checklist.

I was most surprised by that last point – I didn’t know that was optional! Any pilots out there want to weigh in on how frequently checklists are skipped entirely?


Photo credit slasher-fun @ Flickr

Who’s responsible when the robot (or automation) is wrong?

Interesting research (PDF link) on how people behave when robots are wrong. In a recent paper, researchers created a situation where a robot misdirected a human in a game. In follow-up interviews, one of the striking findings that caught my eye was:

When asked whether Robovie was a living being, a technology, or something in-between, participants were about evenly split between “in-between” (52.5%) and “technological” (47.5%). In contrast, when asked the same question about a vending machine and a human, 100% responded that the vending machine was “technological,” 90% said that a human was a “living being,” and 10% viewed a human as “in-between.”

The bottom line was that a large portion of the subjects attributed some moral/social responsibility to this machine.

Taken broadly, the results from this study – based on both behavioral and reasoning data – support the proposition that in the years to come many people will develop substantial and meaningful social relationships with humanoid robots.

Here is a short video clip of how one participant reacted upon discovering Robovie’s error.

I wonder if similar results would be found when people interact with (and make attributions to) less overtly humanoid systems (disembodied automated systems like a smartphone app).

(via Slate)

Development of the ground proximity warning system for aviation

This article tells the story of the inspiration for and creation of a “ground proximity warning” system for pilots, as well as multiple other types of cockpit warnings. Don’t miss the video embedded as a picture in the article! It has the best details!

Some choice excerpts:

About 3.5 miles out from the snow-covered rock face, a red light flashed on the instrument panel and a recorded voice squawked loudly from a speaker.

“Caution — Terrain. Caution — Terrain.”

The pilot ignored it. Just a minute away from hitting the peaks, he held a steady course.

Ten seconds later, the system erupted again, repeating the warning in a more urgent voice.

The pilot still flew on. Snow and rock loomed straight ahead.

Suddenly the loud command became insistent.

“Terrain. Pull up! Pull up! Pull up! Pull up! Pull up!”

Accidents involving controlled flight into terrain still happen, particularly in smaller turboprop aircraft. During the past five years, there have been 50 such accidents, according to Flight Safety Foundation data.

But since the 1990s, the foundation has logged just two in aircraft equipped with Bateman’s enhanced system — one in a British Aerospace BAe-146 cargo plane in Indonesia in 2009; one in an Airbus A321 passenger jet in Pakistan in 2010.

In both cases, the cockpit voice recorder showed the system gave the pilots more than 30 seconds of repeated warnings of the impending collisions, but for some reason the pilots ignored them until too late.

After a Turkish Airlines 737 crashed into the ground heading into Amsterdam in 2009, investigators discovered the pilots were unaware until too late that their air speed was dangerously low on approach. Honeywell added a “low-airspeed” warning to its system, now basic on new 737s.

For the past decade, Bateman has worked on ways of avoiding runway accidents by compiling precise location data on virtually every runway in the world.
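
The escalation pattern in the excerpts (a caution first, then a more urgent repeat, then an imperative “pull up”) can be boiled down to a time-margin threshold scheme. The sketch below is my own drastic simplification; the thresholds, closure-speed arithmetic, and messages are invented, and a real EGPWS works from terrain databases, look-ahead geometry, and aircraft configuration.

```python
# Drastically simplified sketch of escalating terrain alerts.
# Thresholds and messages are invented; this is not how Honeywell's EGPWS works.

def time_to_terrain_s(distance_m: float, closure_speed_ms: float) -> float:
    """Seconds until impact if the current closure rate continues."""
    if closure_speed_ms <= 0:
        return float("inf")  # not closing on terrain
    return distance_m / closure_speed_ms


def terrain_alert(distance_m: float, closure_speed_ms: float) -> str | None:
    """Escalate from caution to command as the time margin shrinks."""
    t = time_to_terrain_s(distance_m, closure_speed_ms)
    if t < 20:
        return "TERRAIN. PULL UP! PULL UP! PULL UP!"
    if t < 40:
        return "CAUTION - TERRAIN. CAUTION - TERRAIN."  # more urgent repeat
    if t < 60:
        return "Caution - Terrain."
    return None


if __name__ == "__main__":
    # Roughly 3.5 miles (~5,600 m) out, closing at ~110 m/s.
    for distance in (5600, 3300, 1500):
        print(f"{distance} m: {terrain_alert(distance, 110.0)}")
```

Even in this toy form, the design issue from the article is visible: the alert can escalate perfectly and still be ignored, which is a trust and compliance problem rather than a sensing problem.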

New automation will warn drivers of lane changes

Ford is introducing a system that first warns the driver of an unintended lane departure, then actually nudges the car back toward the center of the lane if the warning is ignored. From the USA Today article:

When the system detects the car is approaching the edge of the lane without a turn signal activated, the lane marker in the icon turns yellow and the steering wheel vibrates to simulate driving over rumble strips. If the driver doesn’t respond and continues to drift, the lane icon turns red and EPAS will nudge the steering and the vehicle back toward the center of the lane. If the car continues to drift, the vibration is added again along with the nudge. The driver can overcome assistance and vibration at any time by turning the steering wheel, accelerating or braking.

Is this going to be as annoying as having Rich Pak’s phone beep every time I go over the speed limit (which is A LOT)? Just kidding – stopping a drifting car could be pretty great.
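
The escalation the excerpt describes (vibration, then a steering nudge, then both, with the driver able to override at any time) maps onto a small state machine. The sketch below is my own paraphrase of that logic; the state names and transition rules are invented and have nothing to do with Ford’s actual implementation.

```python
# Sketch of the lane-keeping escalation described in the USA Today excerpt.
# States and transition rules are my paraphrase, not Ford's implementation.

from enum import Enum, auto


class LaneAlert(Enum):
    NONE = auto()
    VIBRATE = auto()            # yellow lane icon + steering-wheel vibration
    NUDGE = auto()              # red lane icon + steering nudge toward center
    VIBRATE_AND_NUDGE = auto()  # both, if the drift continues


def next_alert(drifting: bool, turn_signal_on: bool,
               driver_overriding: bool, current: LaneAlert) -> LaneAlert:
    """Escalate one step per update while the driver keeps drifting."""
    if turn_signal_on or driver_overriding or not drifting:
        return LaneAlert.NONE  # intentional lane change, or the driver took over
    if current is LaneAlert.NONE:
        return LaneAlert.VIBRATE
    if current is LaneAlert.VIBRATE:
        return LaneAlert.NUDGE
    return LaneAlert.VIBRATE_AND_NUDGE


if __name__ == "__main__":
    state = LaneAlert.NONE
    for step in range(4):  # keep drifting, no turn signal, no override
        state = next_alert(drifting=True, turn_signal_on=False,
                           driver_overriding=False, current=state)
        print(step, state.name)
```

The override check comes first by design: the quoted description emphasizes that the driver can always overcome the assistance by steering, accelerating, or braking.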


LOLcat photo credit to ClintCJL at Flickr.

Virtual Assistants (automation) and Etiquette

This NYT article discusses the “new” scourge of rude people interacting with their phones in public via voice, thanks in large part to Siri, Apple’s new virtual assistant.

This article reminded me of something slightly different about human interaction with virtual assistants or automation. In a 2004 paper, researchers Parasuraman and Miller wondered if automation that possessed human-like qualities would cause people to alter their behavior.

They compared automation that made suggestions in a polite way or a rude way (always interrupting you). As you might expect, automation that was polite elicited higher ratings of trust and dependence.

This might be one reason why Siri has a playful, almost human-like personality instead of being a robot servant that merely carries out your commands. The danger is that with assistants that are perceived as human-like, people will raise their expectations to unreasonable levels, like mistakenly ascribing political motivations to them.

Lastly, the graph shown below was in the latest issue of Wired magazine.  I think it’s a nice complement to the perceived-reliability graph we showed in a previous post: