Anne sent me an example of "why haven't they thought of this before?": an air vent with the temperature display and control knob all in one.
In this article describing the new Audi TT with a glass dashboard, they describe the novel control/display/air vent seen in the image above. I guess one question here is whether it is accessible only to the driver or centrally located.
The dashboard (shown in the linked article), however, is another story. It looks futuristic, but it also looks like a distraction nightmare!
This clip of Fox News' new studio has been tearing up the internet. But what caught my eye was the touchscreen lag and the general unresponsiveness and accidental touches plaguing the users in the background (see image at top; video here). Starting at the 10-second mark, note the user on the right.
I recently came across two ways in which users can interact with 3D objects. The first is Elon Musk manipulating a rocket model using gestures (via Universe Today). The second is a very cool way to create 3D models from 2D images (via Kottke.org).
Coincidentally, the topic of social/human-technology interaction is in the news quite a bit today. I'm pleased that the human factors implications of our social interactions with technology are getting more attention.
Dr. Rogers has been experimenting with a large robot called the PR2, made by Willow Garage, a robotics company in Palo Alto, Calif., which can fetch and administer medicine, a seemingly simple act that demands a great deal of trust between man and machine.
“We are social beings, and we do develop social types of relationships with lots of things,” she said. “Think about the GPS in your car, you talk to it and it talks to you.” Dr. Rogers noted that people developed connections with their Roomba, the vacuum robot, by giving the machines names and buying costumes for them. “This isn’t a bad thing, it’s just what we do,” she said.
In a more ambitious use of technology, NPR is reporting that researchers are using computer-generated avatars as interviewers to detect soldiers who are at risk of suicide. Simultaneously, facial movement patterns of the interviewee are recorded:
“For each indicator,” Morency explains, “we will display three things.” First, the report will show the physical behavior of the person Ellie just interviewed, tallying how many times he or she smiled, for instance, and for how long. Then the report will show how much depressed people typically smile, and finally how much healthy people typically smile. Essentially it’s a visualization of the person’s behavior compared to a population of depressed and non-depressed people.
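As a toy illustration of the comparison described above (all numbers, names, and thresholds here are invented, not from the actual system), one could express an interviewee's behavior as standard deviations from each reference population's mean:

```python
from statistics import mean, stdev

def compare_to_populations(value, populations):
    """Return how many standard deviations an observed behavior count
    (e.g., smiles per interview) lies from each reference group's mean.
    Purely illustrative; real systems use far richer models."""
    report = {}
    for name, samples in populations.items():
        mu, sigma = mean(samples), stdev(samples)
        report[name] = (value - mu) / sigma
    return report

# Invented example data: smiles per interview for two reference groups
populations = {
    "depressed": [2, 3, 1, 2, 4, 2],
    "healthy": [8, 10, 9, 7, 11, 9],
}
print(compare_to_populations(5, populations))
```

A positive z-score against the "depressed" group and a negative one against the "healthy" group would place the interviewee between the two reference distributions, which is exactly the kind of ambiguous signal the critics below worry about.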
While this sounds like an interesting application, I have to agree with one of its critics that:
“It strikes me as unlikely that face or voice will provide that information with such certainty,” he says.
At worst, it will present the real therapist with a "big data"-type problem: there may be "signal," but way too much noise (see this article).
I had heard that the Tesla Model S (the luxury electric car) had a giant touch screen as one of the main interfaces for secondary car functions and always wondered what that might be like from a human factors/usability perspective. Physical knobs and switches, unlike interface widgets, give a tactile sensation and do not change location on the dashboard.
This post is an interesting examination of the unique dashboard:
Think about a car’s dashboard for a second. It’s populated with analog controls: dials, knobs, and levers, all of which control some car subsystem such as temperature, audio, or navigation. These analog dials, while old, have two features: tactility and physical analogy. Respectively, this means you can feel for a control, and you have an intuition for how the control’s mechanical action affects your car (eg: counterclockwise on AC increases temperature). These small functions provide a very, very important feature: they allow the driver to keep his or her eyes on the road.
Except for the privileged few that have extraordinary kinesthetic sense of where our hands are, the Model S's control scheme is an accident waiting to happen. Hell, most of us can barely type with two hands on an iPhone. Now a Model S driver has to manage all car subsystems on a touchscreen with one hand while driving.
The solution, however, may not be heads-up displays or augmented reality, as the author suggests (citing the HUD in the BMW).
While those displays allow the eyes to remain on the road, they are always in the way: a persistent distraction. Also, paying attention to the HUD means your attention will not be on the road, and what doesn't get paid attention to doesn't exist.
Anne and I are big proponents of making sure the world knows what human factors is all about (hence the blog). Both of us were recently interviewed separately about human factors in general as well as our research areas.
The tone is very general and may give lay people a good sense of the breadth of human factors. Plus, you can hear how we sound!
First, Anne was just interviewed for the radio show "Radio In Vivo".
Late last year, I was interviewed about human factors and my research on the local public radio program Your Day:
What do pop music visualization and neural imaging techniques have in common? Keep reading… You may have already seen this (I'm a little late), but have you ever wanted your favorite song to last forever? Enter "The Infinite Jukebox".
You upload your favorite MP3 (or select among recent uploads) and the site will analyze and parse the beats. When you hit play it will smoothly jump to another part of the song that sounds similar so there is no end. That alone is cool, but the visualization of the process of playing and more importantly jumping to another section is surprisingly effective. When a possible beat intersection is reached, an arc spans the circle and (randomly) jumps or stays.
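The jump logic described above can be sketched in a few lines. This is a minimal toy version, assuming each beat has already been analyzed into a feature vector; the function names and the similarity threshold are my own invention, not the site's actual implementation:

```python
import random

def distance(a, b):
    # Euclidean distance between two beat feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def similar_beats(features, threshold=0.25):
    """For each beat, collect the indices of other beats whose feature
    distance falls below the threshold (candidate jump targets)."""
    return {
        i: [j for j, fj in enumerate(features)
            if j != i and distance(fi, fj) < threshold]
        for i, fi in enumerate(features)
    }

def infinite_play(features, n_beats=100, jump_prob=0.3, seed=None):
    """Yield a potentially never-ending beat sequence: at each beat,
    either advance to the next beat or randomly jump to a similar one."""
    rng = random.Random(seed)
    links = similar_beats(features)
    i = 0
    for _ in range(n_beats):
        yield i
        targets = links[i]
        if targets and rng.random() < jump_prob:
            i = rng.choice(targets)      # jump to a similar-sounding beat
        else:
            i = (i + 1) % len(features)  # advance (wrap at the song's end)
```

Each beat-to-similar-beat link in `links` corresponds to one of the arcs in the site's circular visualization.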
The effect works better for some songs than others. You get a nice at-a-glance view of the song's global organization: highly locally repetitive (like Daft Punk) or more globally repetitive (like a typical highly structured pop song):
It is probably by design that these diagrams look just like connectomes that map the neural pathways in the brain:
Story in the Washington Post about the impending demise of the computer mouse in favor of touch screens:
“Most children here have never seen a computer mouse,” said Hannah Tenpas, 24, a kindergarten teacher at San Antonio.
“The popularity of iPads and other tablets is changing how society interacts with information,” said Aniket Kittur, an assistant professor at the Human-Computer Interaction Institute at Carnegie Mellon University. “. . . Direct manipulation with our fingers, rather than mediated through a keyboard/mouse, is intuitive and easy for children to grasp.”
I realize the media needs a strong narrative to make an interesting story, but the mouse is nowhere near dead. The story is more complicated and depends entirely on the task. In certain applications, the precise pointing afforded by mice is essential, and touch screens are just too cumbersome.
I recently published a study (conducted last year) on automation trust and dependence. In that study, we pseudo-wizard-of-oz’ed a smartphone app that would help diabetics manage their condition.
We had to fake it because no such app existed and it would have been too onerous to program one (and we weren't necessarily interested in the app itself, just in a form of advanced, not-yet-existing automation).
Now, that app is real. I had nothing to do with it but there are now apps that can help diabetics manage their condition. This NYT article discusses the complex area of healthcare apps:
Smartphone apps already fill the roles of television remotes, bike speedometers and flashlights. Soon they may also act as medical devices, helping patients monitor their heart rate or manage their diabetes, and be paid for by insurance.
The idea of medically prescribed apps excites some people in the health care industry, who see them as a starting point for even more sophisticated applications that might otherwise never be built. But first, a range of issues — around vetting, paying for and monitoring the proper use of such apps — needs to be worked out.
The focus of the article is on regulatory hurdles while our focus (in the paper) was how potential patients might accept and react to advice given by a smartphone app.