All posts by Richard Pak

Associate Professor at Clemson University/Department of Psychology

Apple Watch Human Factors

The big news in tech last week was the unveiling of the Apple Watch. I think it is a nice moment to discuss a range of human factors topics. (This topic may elicit strong feelings for or against Apple or the idea of a smartwatch, but let’s keep it about the science.)

The first is technology adoption/acceptance. Lots of people were probably scratching their heads asking, “Who wears a watch nowadays?” But you do see lots of people wearing fitness bands. Superficially, that contrast seems to demonstrate the Technology Acceptance Model (TAM) in action. TAM is a way to understand when people will adopt new technology. It boils down the essential factors to usability (does it seem easy to use?) and usefulness (does it seem like it will help my work or life?).

Fitness bands check both of the above boxes: since they are essentially single-function devices, they are relatively easy to use, and tracking fitness is perceived as useful by many people.

Back to the Watch: it may also check both boxes. It certainly appears easy to use (though we do not know yet), and because it has fitness tracking functions plus many others via apps, it may well be perceived as useful by the same crowd that buys fitness bands.
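If it helps to see the model’s logic spelled out, here is a toy sketch (my own illustration, not the validated TAM questionnaire or its fitted regression weights; the ratings and weights below are invented):

```python
# Toy illustration of the Technology Acceptance Model (TAM).
# The weights and example ratings are invented for this sketch;
# the real model uses validated scales and empirically fitted weights.

def adoption_intention(ease_of_use: float, usefulness: float,
                       w_ease: float = 0.4, w_useful: float = 0.6) -> float:
    """Combine 1-7 ratings of perceived ease of use and perceived
    usefulness into a rough behavioral-intention score (also 1-7)."""
    return w_ease * ease_of_use + w_useful * usefulness

# A fitness band: simple, single-function, clearly useful for tracking fitness.
fitness_band = adoption_intention(ease_of_use=6.5, usefulness=6.0)

# The Watch: probably easy to use (we don't know yet), broadly useful via apps.
watch = adoption_intention(ease_of_use=5.5, usefulness=6.0)

print(f"fitness band: {fitness_band:.1f}, watch: {watch:.1f}")
```

The only point is that both factors feed the intention to adopt; a device that scores low on either one is a harder sell.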

The next topic that got me excited was the discussion of the so-called digital crown (shown below). Anne and I have previously studied the contrast between touch screens and rotary knobs for a variety of computing tasks. Having both options allows the user to select the best input device for the task: touch for pushing big on-screen buttons and making large-scale movements, and the knob for precise, linear movement without obscuring the screen. A knob is certainly easier to use than a touch screen if you have shaky hands or are riding in a bumpy cab.


Two small items also caught my attention. The first was the use of the two-finger gesture on the watch face to send a heartbeat to another user: the same gesture many people intuitively use when they want to feel their own heartbeat.

The second was the ability to send animated emoji to other users. What was noteworthy is that both the eyes and the mouth of the emoji characters can be manipulated. I couldn’t find the literature, but I recall reading that there are cross-cultural differences in how people use and interpret emoji: Western users tend to focus on the mouth while Eastern users tend to focus on the eyes (if you know the reference I’m thinking of, or if I’m misremembering, feel free to comment).



There’s so much I haven’t brought up (haptic and multi-modal feedback, user interface design, automation, voice input and of course privacy)!


Interesting control/display

Anne sent me an example of “why haven’t they thought of this before?”: an air vent with the temperature display and control knob all in one.

In this article describing the new Audi TT with a glass dashboard, they describe the novel control/display/air vent seen in the image above. I guess one problem here is whether it is accessible only to the driver or whether it is centrally located.


The dashboard (shown in the linked article), however, is another story. While it looks futuristic, it seems like a distraction nightmare!

The Future Newsroom (bad touchscreen lag)


This clip of Fox News’ new studio has been tearing up the internet. But what caught my eye was the touchscreen lag and the general unresponsiveness and accidental touches plaguing the users in the background (see image at top; video here). Starting at the 10-second mark, note the user on the right.

Potpourri–Lazy Summer Edition

It’s summer and we (along with some of you) are taking a break.  But here’s a list of interesting usability/HF-related things that have crossed my path:

  • After much complaining, Ford is bringing back physical knobs in their MyTouch in-car controls. Anne and I worked on some research (PDF) in our past lives as graduate students that directly compared touch-only interfaces to knob-based interfaces, so it’s nice to see it is still a major issue; if only Ford had read our 9-year-old paper 🙂
  • Trucks driving under very low bridges is such a large problem in Australia that they are deploying a really novel and clever warning system: a waterfall that displays a projected warning sign, which is hard to miss!
  • Apple will introduce their next version of OSX in the fall. One of the features I’m most excited about is system-level tag support. Tags allow users to organize their files regardless of location or type. I’m particularly interested in personal, single-user-generated tagging (as compared to collaborative tagging like that used on Flickr), as it appears to benefit older adults’ information organization and retrieval (PDF). This pleases me.

Human-Technology Interactions in Health


Coincidentally, social/human-technology interaction is in the news quite a bit today. I’m pleased that the human factors implications of social interaction with technology are getting more focus.

First, Dr. Wendy Rogers of Georgia Tech gets interviewed in the New York Times about her work on older adults and in-home helper robots:

Dr. Rogers has been experimenting with a large robot called the PR2, made by Willow Garage, a robotics company in Palo Alto, Calif., which can fetch and administer medicine, a seemingly simple act that demands a great deal of trust between man and machine.

“We are social beings, and we do develop social types of relationships with lots of things,” she said. “Think about the GPS in your car, you talk to it and it talks to you.” Dr. Rogers noted that people developed connections with their Roomba, the vacuum robot, by giving the machines names and buying costumes for them. “This isn’t a bad thing, it’s just what we do,” she said.

In a more ambitious use of technology, NPR is reporting that researchers are using computer-generated avatars as interviewers to detect soldiers who are susceptible to suicide. Simultaneously, facial movement patterns of the interviewee are recorded:

“For each indicator,” Morency explains, “we will display three things.” First, the report will show the physical behavior of the person Ellie just interviewed, tallying how many times he or she smiled, for instance, and for how long. Then the report will show how much depressed people typically smile, and finally how much healthy people typically smile. Essentially it’s a visualization of the person’s behavior compared to a population of depressed and non-depressed people.
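As I read it, the report boils down to a per-indicator comparison of the interviewee against two reference groups. A rough sketch of that idea (the numbers and structure here are invented for illustration; I have no idea what the actual system computes):

```python
# Invented illustration of comparing one interviewee's behavioral indicators
# to depressed and non-depressed reference groups; all values are made up.

indicators = {
    # indicator: (interviewee, depressed group mean, healthy group mean)
    "smiles per minute":       (1.2, 0.8, 3.5),
    "mean smile duration (s)": (0.9, 0.7, 1.6),
}

for name, (person, depressed_mean, healthy_mean) in indicators.items():
    closer_to = ("depressed" if abs(person - depressed_mean) < abs(person - healthy_mean)
                 else "healthy")
    print(f"{name}: {person} vs. depressed {depressed_mean} / healthy {healthy_mean} "
          f"-> closer to the {closer_to} group")
```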

While this sounds like an interesting application, I have to agree with one of its critics that:

“It strikes me as unlikely that face or voice will provide that information with such certainty,” he says.

At worst, it will bury the real therapist in a “big data”-type situation where there may be “signal” but way too much noise (see this article).

Recent developments in in-vehicle distractions: Voice input no better than manual input

Earlier this week the United States Department of Transportation released guidelines for automakers designed to reduce the distraction caused by in-vehicle technologies (e.g., navigation systems):

The guidelines include recommendations to limit the time a driver must take his eyes off the road to perform any task to two seconds at a time and twelve seconds total.

The recommendations outlined in the guidelines are consistent with the findings of a new NHTSA naturalistic driving study, The Impact of Hand-Held and Hands-Free Cell Phone Use on Driving Performance and Safety Critical Event Risk. The study showed that visual-manual tasks associated with hand-held phones and other portable devices increased the risk of getting into a crash by three times. [emphasis added]
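To put those numbers in concrete terms, here is a minimal sketch of checking a task’s logged eyes-off-road glance durations against the two-second and twelve-second limits (the function and the example data are hypothetical, not part of the NHTSA guidelines or any real logging tool):

```python
# Hypothetical check of logged eyes-off-road glance durations (in seconds)
# against the guideline limits: no single glance over 2 s, and no more than
# 12 s of total eyes-off-road time for the whole task.

def violates_glance_guideline(glances, per_glance_limit=2.0, total_limit=12.0):
    """Return True if any glance exceeds the per-glance limit or the
    task's total eyes-off-road time exceeds the overall limit."""
    return any(g > per_glance_limit for g in glances) or sum(glances) > total_limit

# Example: entering a navigation destination with several short glances.
task_glances = [1.4, 1.8, 2.3, 1.1, 1.6]
print(violates_glance_guideline(task_glances))  # True: one glance lasts 2.3 s
```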

But a new study (I have not read the paper yet) seems to show that even when you take away the “manual” aspect through voice input, the danger is not mitigated:

The study by the Texas Transportation Institute at Texas A&M University was the first to compare voice-to-text and traditional texting on a handheld device in an actual driving environment.

“In each case, drivers took about twice as long to react as they did when they weren’t texting,” Christine Yager, who headed the study, told Reuters. “Eye contact to the roadway also decreased, no matter which texting method was used.”