Category Archives: technology

Human-Technology Interactions in Health


Coincidentally, social/human-technology interaction is in the news quite a bit today.  I’m pleased that the human factors implications of our social interactions with technology are getting more attention.

First, Dr. Wendy Rogers of Georgia Tech gets interviewed in the New York Times about her work on older adults and in-home helper robots:

Dr. Rogers has been experimenting with a large robot called the PR2, made by Willow Garage, a robotics company in Palo Alto, Calif., which can fetch and administer medicine, a seemingly simple act that demands a great deal of trust between man and machine.

“We are social beings, and we do develop social types of relationships with lots of things,” she said. “Think about the GPS in your car, you talk to it and it talks to you.” Dr. Rogers noted that people developed connections with their Roomba, the vacuum robot, by giving the machines names and buying costumes for them. “This isn’t a bad thing, it’s just what we do,” she said.

In a more ambitious use of technology, NPR is reporting that researchers are using computer-generated avatars as interviewers to detect soldiers who are susceptible to suicide. Simultaneously, facial movement patterns of the interviewee are recorded:

“For each indicator,” Morency explains, “we will display three things.” First, the report will show the physical behavior of the person Ellie just interviewed, tallying how many times he or she smiled, for instance, and for how long. Then the report will show how much depressed people typically smile, and finally how much healthy people typically smile. Essentially it’s a visualization of the person’s behavior compared to a population of depressed and non-depressed people.
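The per-indicator comparison Morency describes could be sketched roughly as below. This is a toy illustration only: the function name, the numbers, and the use of z-scores are all my assumptions, not details from the actual system.

```python
# Toy sketch of comparing one behavioral indicator (e.g., smiles per
# minute) against depressed and healthy reference populations.
# All names and numbers here are made up for illustration.

def compare_indicator(subject_value, depressed_mean, depressed_sd,
                      healthy_mean, healthy_sd):
    """Return z-scores of the subject against each reference population."""
    z_depressed = (subject_value - depressed_mean) / depressed_sd
    z_healthy = (subject_value - healthy_mean) / healthy_sd
    return {"vs_depressed": z_depressed, "vs_healthy": z_healthy}

# Hypothetical subject who smiled 1.2 times per minute, compared against
# invented population statistics:
report = compare_indicator(subject_value=1.2,
                           depressed_mean=0.8, depressed_sd=0.5,
                           healthy_mean=2.5, healthy_sd=1.0)
```

A report like this says only where the subject falls relative to each population, which is exactly why the critic quoted below is skeptical: the two distributions likely overlap heavily.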

While this sounds like an interesting application, I have to agree with one of its critics that:

“It strikes me as unlikely that face or voice will provide that information with such certainty,” he says.

At worst, it will flood the real therapist with a “big data”-type situation where there may be “signal” but way too much noise (see this article).

Usability of a Glass Dashboard?


I had heard that the Tesla Model S (the luxury electric car) had a giant touch screen as one of the main interfaces for secondary car functions and always wondered what that might be like from a human factors/usability perspective. Physical knobs and switches, unlike interface widgets, give a tactile sensation and do not change location on the dashboard.

This post is an interesting examination of the unique dashboard:

Think about a car’s dashboard for a second. It’s populated with analog controls: dials, knobs, and levers, all of which control some car subsystem such as temperature, audio, or navigation. These analog dials, while old, have two features: tactility and physical analogy. Respectively, this means you can feel for a control, and you have an intuition for how the control’s mechanical action affects your car (eg: counterclockwise on AC increases temperature). These small functions provide a very, very important feature: they allow the driver to keep his or her eyes on the road.

Except for the privileged few who have an extraordinary kinesthetic sense of where their hands are, the Model S’s control scheme is an accident waiting to happen. Hell, most of us can barely type with two hands on an iPhone. Now a Model S driver has to manage all car subsystems on a touchscreen with one hand while driving.

The solution, however, may not be heads-up displays or augmented reality, as the author suggests (citing the HUD in the BMW).


While those displays allow the eyes to remain pointed at the road, the HUD is always in the way, a persistent distraction. Also, paying attention to the HUD means your attention will not be on the road, and what doesn’t get paid attention to doesn’t exist.

Anne & Rich Interviewed about Human Factors

Anne and I are big proponents of making sure the world knows what human factors is all about (hence the blog).  Both of us were recently interviewed separately about human factors in general as well as our research areas.

The tone is very general and may give lay people a good sense of the breadth of human factors.  Plus, you can hear how we sound!

First, Anne was just interviewed for the radio show “Radio In Vivo”.


Late last year, I was interviewed about human factors and my research on the local public radio program Your Day:


Effective visualization of an ongoing process

What do pop music visualization and neural imaging techniques have in common?  Keep reading… You may have already seen this (I’m a little late), but have you ever wanted your favorite song to last forever?  Enter “The Infinite Jukebox“.

You upload your favorite MP3 (or select among recent uploads) and the site will analyze and parse the beats.  When you hit play, it will smoothly jump to another part of the song that sounds similar, so the song never ends.  That alone is cool, but the visualization of the playback, and especially of the jumps, is surprisingly effective.  When a possible beat intersection is reached, an arc spans the circle and the playhead (randomly) jumps or stays.
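The jump-or-stay mechanic can be sketched as a random walk over a beat-similarity graph. This is a minimal sketch under my own assumptions; the real site does sophisticated audio analysis to decide which beats sound alike, which I replace here with hand-made toy data.

```python
import random

# Minimal sketch of "infinite playback": beats that sound similar become
# jump edges; at each such beat the player either continues to the next
# beat or leaps across the song. The similarity analysis itself is out
# of scope, so similar_beats is invented toy data.

def infinite_walk(n_beats, similar_beats, steps, jump_prob=0.5, seed=0):
    """Return a sequence of beat indices that never has to end."""
    rng = random.Random(seed)
    beat = 0
    path = []
    for _ in range(steps):
        path.append(beat)
        jumps = similar_beats.get(beat, [])
        if jumps and rng.random() < jump_prob:
            beat = rng.choice(jumps)      # leap to a similar-sounding beat
        else:
            beat = (beat + 1) % n_beats   # otherwise just play on
    return path

# Beats 3 and 7 sound alike, so playback can loop through them forever.
path = infinite_walk(n_beats=8, similar_beats={7: [3], 3: [7]}, steps=32)
```

Drawing each entry in `similar_beats` as an arc across a circle of beats gives exactly the kind of diagram the site displays.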

The effect works better for some songs than others.  You also get a nice at-a-glance view of the global organization of the song: highly locally repetitive (like Daft Punk) or more globally repetitive (like a typical highly structured pop song).

It is probably by design that these diagrams look just like connectomes, the circular diagrams that map the neural pathways in the brain.

More on the circular diagrams of connectomes and the software used to make them (Circos).

HF/Usability Potpourri

It’s the return of HF/Potpourri:

Goodbye Mouse?

Story in the Washington Post about the impending demise of the computer mouse in favor of touch screens:

“Most children here have never seen a computer mouse,” said Hannah Tenpas, 24, a kindergarten teacher at San Antonio.

“The popularity of iPads and other tablets is changing how society interacts with information,” said Aniket Kittur, an assistant professor at the Human-Computer Interaction Institute at Carnegie Mellon University. “. . . Direct manipulation with our fingers, rather than mediated through a keyboard/mouse, is intuitive and easy for children to grasp.”

I realize the media needs a strong narrative to make an interesting story, but the mouse is nowhere near dead.  The story is more complicated and depends completely on the task.  For certain applications, the precise pointing afforded by a mouse makes a touch screen just too cumbersome.

The article also has a great graphic describing how touch screens work and a short retrospective of input devices.

(post image from flickr user aperturismo)

Prescription Smartphone Apps

I recently published a study (conducted last year) on automation trust and dependence. In that study, we pseudo-wizard-of-oz’ed a smartphone app that would help diabetics manage their condition.

We had to fake it because no such app existed, and it would have been too onerous to program one (besides, we weren’t necessarily interested in the app itself, just in a form of advanced, not-yet-existing automation).

Now, that app is real.  I had nothing to do with it but there are now apps that can help diabetics manage their condition.  This NYT article discusses the complex area of healthcare apps:

Smartphone apps already fill the roles of television remotes, bike speedometers and flashlights. Soon they may also act as medical devices, helping patients monitor their heart rate or manage their diabetes, and be paid for by insurance.

The idea of medically prescribed apps excites some people in the health care industry, who see them as a starting point for even more sophisticated applications that might otherwise never be built. But first, a range of issues — around vetting, paying for and monitoring the proper use of such apps — needs to be worked out.

The focus of the article is on regulatory hurdles while our focus (in the paper) was how potential patients might accept and react to advice given by a smartphone app.

(photo: Ozier Muhammad/The New York Times)

Everyday Automation: Auto-correct

This humorous NYT article discusses the foibles of auto-correct on computers and phones. Auto-correct, a more advanced type of the old spell checker, is a type of automation. We’ve discussed automation many times on this blog.

But auto-correct is unique in that it’s probably one of the most frequent touchpoints between humans and automation.
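For readers curious what the “old spell checker” layer of this automation looks like, here is a toy dictionary-based corrector. It is a sketch under my own assumptions: real auto-correct adds language models and keyboard-distance weighting, and the word list here is invented.

```python
# Toy dictionary-based correction: pick the known word with the
# smallest edit distance to what was typed.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def autocorrect(word, dictionary):
    """Return the dictionary word closest to the typed word."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

# Invented mini-dictionary, echoing words from the article:
words = ["geocentric", "egocentric", "cuticles", "catalogues"]
```

Note that a nearest-word rule like this happily maps a slightly mistyped “geocentic” to “geocentric”, but with a richer dictionary it would just as happily produce the “geocentric”-to-“egocentric” mix-ups the article complains about.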

The article nicely covers, in lay language, many of the concepts of automation:

Out of the loop syndrome:

Who’s the boss of our fingers? Cyberspace is awash with outrage. Even if hardly anyone knows exactly how it works or where it is, Autocorrect is felt to be haunting our cellphones or watching from the cloud.

Trust:

We are collectively peeved. People blast Autocorrect for mangling their intentions. And they blast Autocorrect for failing to un-mangle them.

I try to type “geocentric” and discover that I have typed “egocentric”; is Autocorrect making a sort of cosmic joke? I want to address my tweeps (a made-up word, admittedly, but that’s what people do). No: I get “twerps.” Some pairings seem far apart in the lexicographical space. “Cuticles” becomes “citified.” “Catalogues” turns to “fatalities” and “Iditarod” to “radiator.” What is the logic?

Reliance:

One more thing to worry about: the better Autocorrect gets, the more we will come to rely on it. It’s happening already. People who yesterday unlearned arithmetic will soon forget how to spell. One by one we are outsourcing our mental functions to the global prosthetic brain.

Humorously, the article even touches on anthropomorphism of automation (attributing human-like characteristics to it, even unintentionally), which is my research area:

Peter Sagal, the host of NPR’s “Wait Wait … Don’t Tell Me!” complains via Twitter: “Autocorrect changed ‘Fritos’ to ‘frites.’ Autocorrect is effete. Pass it on.”

(photo credit el frijole @flickr)

Who’s responsible when the robot (or automation) is wrong?

Interesting research (PDF link) on how people behave when robots are wrong. In a recent paper, researchers created a situation where a robot mis-directed a human in a game. In follow-up interviews, one of the striking findings that caught my eye was:

When asked whether Robovie was a living being, a technology, or something in-between, participants were about evenly split between “in-between” (52.5%) and “technological” (47.5%). In contrast, when asked the same question about a vending machine and a human, 100% responded that the vending machine was “technological,” 90% said that a human was a “living being,” and 10% viewed a human as “in-between.”

The bottom line was that a large portion of the subjects attributed some moral/social responsibility to this machine.

Taken broadly, the results from this study – based on both behavioral and reasoning data – support the proposition that in the years to come many people will develop substantial and meaningful social relationships with humanoid robots.

Here is a short video clip of how one participant reacted upon discovering Robovie’s error.

I wonder if similar results would be found when people interact with (and make attributions to) less overtly humanoid systems (disembodied automated systems like a smartphone app).

(via Slate)