The big news in tech last week was the unveiling of the Apple Watch. I think it is a nice moment to discuss a range of human factors topics. (This topic may elicit strong feelings for or against Apple or the idea of a smartwatch but let’s keep it about the science.)
The first is technology adoption/acceptance. Lots of people were probably scratching their heads, asking, “Who wears a watch nowadays?” But you do see lots of people wearing fitness bands. Superficially, that contrast seems to demonstrate the Technology Acceptance Model (TAM) in action. TAM is a way to understand when people will adopt new technology. It boils the essential factors down to usability (does it seem easy to use?) and usefulness (does it seem like it will help my work or life?).
Fitness bands check both of the above boxes: since they are essentially single-function devices they are relatively easy to use and tracking fitness is perceived as useful for many people.
Back to the Watch: it may also check both of the above boxes. It certainly appears easy to use (though we do not know yet), and because it has fitness-tracking functions plus many others via apps, it may well be perceived as useful by the same crowd that buys fitness bands.
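As a toy illustration of TAM's two-factor logic (a sketch only — the weights and the 1–7 scale below are invented for illustration; real TAM studies estimate weights by regression on survey responses):

```python
# Toy sketch of the Technology Acceptance Model (TAM):
# perceived usefulness and perceived ease of use combine
# into an adoption-intention score. Weights are made up.

def adoption_intention(perceived_usefulness, perceived_ease_of_use,
                       w_useful=0.6, w_ease=0.4):
    """Combine TAM's two core factors into a single score.

    Inputs are assumed to be 1-7 Likert-scale averages.
    """
    return (w_useful * perceived_usefulness
            + w_ease * perceived_ease_of_use)

# A fitness band: single-purpose (easy) and tracks health (useful),
# so both factors are high and so is the predicted intention.
band_score = adoption_intention(perceived_usefulness=6,
                                perceived_ease_of_use=6)
```

The point of the sketch is just that both factors matter: a device that is useful but baffling, or easy but pointless, scores lower than one that is both.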
The next topic that got me excited was the discussion of the so-called digital crown (shown below). Anne and I have previously studied the contrasts between touch screens and rotary knobs for a variety of computing tasks. Having both choices allows the user to select the best input device for the task: touch for pushing big on-screen buttons and large-scale movement, and the knob for precise, linear movement without obscuring the screen. Using a knob is certainly easier than a touch screen if you have shaky hands or are riding in a bumpy cab.
One small item of note in the Watch was the use of the two-finger gesture on the watch face to send a heartbeat to another user–the same gesture many people intuitively use when they want to feel their own heartbeat.
Finally, the Watch has the ability to send animated emoji to other users. What was noteworthy is the ability to manipulate both eyes and mouth in emoji characters. I couldn’t find any literature, but I recall reading somewhere that there are some cross-cultural differences in how people use and interpret emoji: Western users tend to focus on the mouth while Eastern users tend to focus on the eyes (if you know what reference I’m talking about, or if I’m misremembering, feel free to comment).
There’s so much I haven’t brought up (haptic and multi-modal feedback, user interface design, automation, voice input and of course privacy)!
This guest post is from graduate students Haley Vaigneur and Bliss Altenhoff. Haley and Bliss compared the usability of two fitness trackers as part of a graduate course in health informatics taught by Kelly Caine.
Wearable fitness trackers allow users to track and monitor their health. While these devices originated as a way for doctors to monitor chronically ill patients’ vitals, they have recently been developed and marketed to a more general, health-conscious market. Equipped with advanced sensors such as accelerometers, these devices automatically track users’ activity and sleep, which can then be compared with their logged fitness goals and daily diet. Users can then use their statistics to help create or maintain a healthier lifestyle. Two examples of such devices are the Jawbone Up and Fitbit Flex, shown above.
Wearable technology is popular and has the potential to dramatically impact health (e.g., long-term health and activity data tracking, immediate syncing with Electronic Health Records (EHRs)). But these benefits can only be realized if the user is able to effectively use and understand these devices. This was the motivation for focusing on two of the most popular models of fitness trackers, the Jawbone Up and Fitbit Flex, and their accompanying smartphone apps.
This study examined the usability of these two devices and their accompanying smartphone apps by having 14 participants (7 for Jawbone Up, 7 for Fitbit Flex) perform a think-aloud test on five key features: Setup, Setting Goals, Tracking Diet, Tracking Activity, and Setting an Alarm. Participants then kept the wearable for three days and were encouraged to incorporate it into their normal routine. On the third day, participants completed the System Usability Scale survey and an informal interview regarding their experiences using the wearable.
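For readers unfamiliar with it, the System Usability Scale is scored with a standard formula: ten items on a 1–5 scale, where odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 to a 0–100 range. A minimal sketch (the example responses here are hypothetical, not the study's data):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    1-5 Likert responses, per Brooke's standard scoring:
    odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response),
    and the sum is scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical participant: strongly positive on every item
# (5 on positive odd items, 1 on negative even items) scores 100.
best = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])
# Neutral 3s across the board score 50.
neutral = sus_score([3] * 10)
```

Scores above roughly 68 are conventionally read as above-average usability, which is why a single 0–100 number is handy for comparing two devices.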
Some of the key Jawbone Up findings were:
Adding food or drink items was somewhat difficult due to unintuitive organization and unpredictable bugs. For example, one participant attempted to add a food item by scanning the bar code of a Lunchable, but the app added a Dr. Pepper to the log.
Participants struggled to find the alarm settings, with one conducting a general web search for help to understand the Smart Sleep Window settings and how to save alarm settings.
None of the participants were able to figure out how to communicate to the band or app that they would like to begin a workout. They didn’t realize that the Stopwatch menu option was intended to time the workout.
Some of the key findings for the Fitbit Flex were:
Participants felt that the wristband (when using the appropriately sized band) was neither uncomfortable nor revealing, and they were proud to wear it because it made them feel healthy.
Users at first had a difficult time figuring out where in the app to set their health goals. Their instinct was to look on the app homepage (the Dashboard), but the settings were under the Account tab.
Some users had difficulty putting on the wristband, and several noted that it fell off unexpectedly. Users were also confused about where to “tap” the wristband to activate it, based on the instructions given in the app. The picture can appear to instruct the user to tap below the black screen, when the user actually needs to tap the screen directly and firmly.
Users did not realize that after turning on Bluetooth on their phone, they needed to return to the app to tell the phone and wristband to begin syncing. They also noted that leaving Bluetooth on all day drained their phone battery.
Based on time per task and number of errors, the Fitbit Flex performed better than the Jawbone Up on the five tasks. Users’ ultimate trust in the data, willingness to continue using the wearable, and general satisfaction with each wearable were heavily influenced by their initial experiences (first day). The positive initial think-aloud results for the Fitbit Flex were also consistent with a more positive later experience and stronger acceptance of the wearable.
This study found that there is still much room for improvement in the usability of the accompanying smartphone apps. A major concern for these kinds of devices is keeping user interest and motivation, which can easily be lost through confusing or cumbersome designs. By striving to improve the human factors of the apps alongside the capabilities of the wearables themselves, there is great potential for greater user satisfaction, and thus more long-term use.
While activity-tracking wearables are currently most popular with more tech-savvy, active people, these devices should be designed for users of all ages and experience levels. These devices could change health monitoring drastically and give people the power and ability to make better choices and live healthier lifestyles.
Haley Vaigneur is a graduate student in Industrial Engineering at Clemson University. Her concentration is Human Factors and Ergonomics, with an emphasis on research in the healthcare field.
Bliss Altenhoff is a Doctoral Candidate studying Human Factors Psychology at Clemson University, where she received her M.S. in Applied Psychology in 2012. She is a member of the Perception and Action (PAC) lab, where her research is concentrated on enhancing human perception and performance by enriching perceptual display technologies for laparoscopic surgeons.
ACKNOWLEDGMENTS This material is based upon work supported by the National Science Foundation under Grant No. 1314342. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Anne sent me an example of, “why haven’t they thought of this before”: an air vent with the temperature display and control knob all in one.
This article describing the new Audi TT with a glass dashboard covers the novel control/display/air vent combination seen in the image above. I guess one problem here is whether it is accessible only to the driver or centrally located.
The dashboard (shown in the linked article), however, is another story. While it looks futuristic, it seems like a distraction nightmare!
This clip of Fox News’ new studio has been tearing up the internet. But what caught my eye was the touchscreen lag and general unresponsiveness/accidental touches of the users in the background (see image at top; video here). Starting at the 10 second mark, note the user on the right.
I recently came across two ways in which users can interact with 3D objects. The first is Elon Musk manipulating a rocket model using gestures (via Universe Today). The second is a very cool way to create 3D models from 2D images (via Kottke.org).
Coincidentally, the topic of social human-technology interaction is in the news quite a bit today. I’m pleased that the human factors implications of social interaction with technology are getting more focus.
Dr. Rogers has been experimenting with a large robot called the PR2, made by Willow Garage, a robotics company in Palo Alto, Calif., which can fetch and administer medicine, a seemingly simple act that demands a great deal of trust between man and machine.
“We are social beings, and we do develop social types of relationships with lots of things,” she said. “Think about the GPS in your car, you talk to it and it talks to you.” Dr. Rogers noted that people developed connections with their Roomba, the vacuum robot, by giving the machines names and buying costumes for them. “This isn’t a bad thing, it’s just what we do,” she said.
In a more ambitious use of technology, NPR is reporting that researchers are using computer-generated avatars as interviewers to detect soldiers who are at risk of suicide. Simultaneously, facial movement patterns of the interviewee are recorded:
“For each indicator,” Morency explains, “we will display three things.” First, the report will show the physical behavior of the person Ellie just interviewed, tallying how many times he or she smiled, for instance, and for how long. Then the report will show how much depressed people typically smile, and finally how much healthy people typically smile. Essentially it’s a visualization of the person’s behavior compared to a population of depressed and non-depressed people.
While this sounds like an interesting application, I have to agree with one of its critics that:
“It strikes me as unlikely that face or voice will provide that information with such certainty,” he says.
At worst, it will flood the real therapist with a “big data”-type situation where there may be “signal” but way too much noise (see this article).
I had heard that the Tesla Model S (the luxury electric car) had a giant touch screen as one of the main interfaces for secondary car functions and always wondered what that might be like from a human factors/usability perspective. Physical knobs and switches, unlike interface widgets, give a tactile sensation and do not change location on the dashboard.
This post is an interesting examination of the unique dashboard:
Think about a car’s dashboard for a second. It’s populated with analog controls: dials, knobs, and levers, all of which control some car subsystem such as temperature, audio, or navigation. These analog dials, while old, have two features: tactility and physical analogy. Respectively, this means you can feel for a control, and you have an intuition for how the control’s mechanical action affects your car (eg: counterclockwise on AC increases temperature). These small functions provide a very, very important feature: they allow the driver to keep his or her eyes on the road.
Except for the privileged few who have an extraordinary kinesthetic sense of where their hands are, the Model S’s control scheme is an accident waiting to happen. Hell, most of us can barely type with two hands on an iPhone. Now a Model S driver has to manage all car subsystems on a touchscreen with one hand while driving.
The solution, however, may not be heads-up displays or augmented reality, as the author suggests (citing the HUD in the BMW).
While those displays allow the eyes to remain on the road, they are always in the way–a persistent distraction. Also, paying attention to the HUD means your attention will not be on the road–and what doesn’t get paid attention to doesn’t exist.
Anne and I are big proponents of making sure the world knows what human factors is all about (hence the blog). Both of us were recently interviewed separately about human factors in general as well as our research areas.
The tone is very general and may give lay people a good sense of the breadth of human factors. Plus, you can hear how we sound!
First, Anne was just interviewed for the radio show “Radio In Vivo”.
Late last year, I was interviewed about human factors and my research on the local public radio program Your Day:
What do pop music visualization and neural imaging techniques have in common? Keep reading… You may have already seen this (I’m a little late), but have you ever wanted your favorite song to last forever? Enter “The Infinite Jukebox”.
You upload your favorite MP3 (or select among recent uploads) and the site will analyze and parse the beats. When you hit play it will smoothly jump to another part of the song that sounds similar so there is no end. That alone is cool, but the visualization of the process of playing and more importantly jumping to another section is surprisingly effective. When a possible beat intersection is reached, an arc spans the circle and (randomly) jumps or stays.
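The mechanism can be sketched roughly like this. A toy version only: the real site derives beats from actual audio analysis, while here the beat features, the distance threshold, and the jump probability are all invented for illustration:

```python
import random

def distance(a, b):
    """Euclidean distance between two beat feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def similar_beats(beats, threshold=0.5):
    """Map each beat index to other beats whose features are close
    enough that jumping between them should sound seamless.
    These are the arcs drawn across the circle."""
    return {i: [j for j, b in enumerate(beats)
                if j != i and distance(a, b) < threshold]
            for i, a in enumerate(beats)}

def endless_walk(beats, links, steps, p_jump=0.3, seed=None):
    """Simulate endless playback: usually advance to the next beat,
    but occasionally jump to a similar-sounding beat instead."""
    rng = random.Random(seed)
    path, i = [], 0
    for _ in range(steps):
        path.append(i)
        if links[i] and rng.random() < p_jump:
            i = rng.choice(links[i])      # take an arc across the circle
        else:
            i = (i + 1) % len(beats)      # normal beat-to-beat playback
    return path

# Four toy "beats" as 1-D feature vectors; beats 0 and 2 sound alike.
beats = [(0.0,), (1.0,), (0.05,), (5.0,)]
links = similar_beats(beats)
playback = endless_walk(beats, links, steps=16, seed=42)
```

With `p_jump` set to zero the walk just loops through the song in order; raising it makes the playback wander through the arcs, which is the "never ends" effect.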
The effect works better for some songs than others. You get a nice at-a-glance view of the global organization of the song: highly locally repetitive (like Daft Punk) or more globally repetitive (like a typical, highly structured pop song):
It is probably by design that these diagrams look just like connectomes that map the neural pathways in the brain: