This clip of Fox News’ new studio has been tearing up the internet. But what caught my eye was the touchscreen lag and the general unresponsiveness and accidental touches experienced by the users in the background (see image at top; video here). Starting at the 10-second mark, note the user on the right.
I recently came across two ways in which users can interact with 3D objects. The first is Elon Musk manipulating a rocket model using gestures (via Universe Today). The second is a very cool way to create 3D models from 2D images (via Kottke.org).
It’s summer and we (along with some of you) are taking a break. But here’s a list of interesting usability/HF-related things that have crossed my path:
After much complaining, Ford is bringing back physical knobs to its MyFord Touch in-car controls. Anne and I worked on some research (PDF) in our past lives as graduate students that directly compared touch-only interfaces to knob-based interfaces, so it’s nice to see it is still a major issue; if only Ford had read our 9-year-old paper 🙂
Trucks striking very low bridges is such a big problem in Australia that a really novel and clever warning system is being deployed: a waterfall that serves as a projection screen for a warning sign that’s hard to miss!
Coincidentally, social human-technology interaction is in the news quite a bit today. I’m pleased that the human factors implications of our social interactions with technology are getting more attention.
Dr. Rogers has been experimenting with a large robot called the PR2, made by Willow Garage, a robotics company in Palo Alto, Calif., which can fetch and administer medicine, a seemingly simple act that demands a great deal of trust between man and machine.
“We are social beings, and we do develop social types of relationships with lots of things,” she said. “Think about the GPS in your car, you talk to it and it talks to you.” Dr. Rogers noted that people developed connections with their Roomba, the vacuum robot, by giving the machines names and buying costumes for them. “This isn’t a bad thing, it’s just what we do,” she said.
In a more ambitious use of technology, NPR is reporting that researchers are using computer-generated avatars as interviewers to detect soldiers at risk of suicide. Simultaneously, the facial movement patterns of the interviewee are recorded:
“For each indicator,” Morency explains, “we will display three things.” First, the report will show the physical behavior of the person Ellie just interviewed, tallying how many times he or she smiled, for instance, and for how long. Then the report will show how much depressed people typically smile, and finally how much healthy people typically smile. Essentially it’s a visualization of the person’s behavior compared to a population of depressed and non-depressed people.
While this sounds like an interesting application, I have to agree with one of its critics that:
“It strikes me as unlikely that face or voice will provide that information with such certainty,” he says.
At worst, it will flood the real therapist with a “big data”-type situation where there may be “signal” but way too much noise (see this article).
Earlier this week, the United States Department of Transportation released guidelines for automakers designed to reduce the distraction caused by in-vehicle technologies (e.g., navigation systems):
The guidelines include recommendations to limit the time a driver must take his eyes off the road to perform any task to two seconds at a time and twelve seconds total.
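The two-second/twelve-second recommendation lends itself to a simple check. A minimal sketch in Python; the function name, constants, and glance data here are illustrative, not part of the NHTSA guidelines themselves:

```python
MAX_SINGLE_GLANCE_S = 2.0   # recommended limit for any single off-road glance
MAX_TOTAL_GLANCE_S = 12.0   # recommended cumulative off-road time for a task

def task_meets_guideline(glance_durations):
    """True if every off-road glance is <= 2 s and their sum is <= 12 s."""
    return (all(g <= MAX_SINGLE_GLANCE_S for g in glance_durations)
            and sum(glance_durations) <= MAX_TOTAL_GLANCE_S)

# Hypothetical glance logs (seconds) for two in-vehicle tasks:
print(task_meets_guideline([1.2, 1.8, 0.9]))  # short glances, short task: True
print(task_meets_guideline([2.5, 1.0]))       # one glance over 2 s: False
```

Note that a task can fail either way: one long glance, or many short glances that add up past twelve seconds.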
The recommendations outlined in the guidelines are consistent with the findings of a new NHTSA naturalistic driving study, The Impact of Hand-Held and Hands-Free Cell Phone Use on Driving Performance and Safety Critical Event Risk. The study showed that visual-manual tasks associated with hand-held phones and other portable devices increased the risk of getting into a crash by three times. [emphasis added]
But a new study (I have not read the paper yet) seems to show that even when you take away the “manual” aspect through voice input, the danger is not mitigated:
The study by the Texas Transportation Institute at Texas A&M University was the first to compare voice-to-text and traditional texting on a handheld device in an actual driving environment.
“In each case, drivers took about twice as long to react as they did when they weren’t texting,” Christine Yager, who headed the study, told Reuters. “Eye contact to the roadway also decreased, no matter which texting method was used.”
I had heard that the Tesla Model S (the luxury electric car) had a giant touch screen as one of the main interfaces for secondary car functions and always wondered what that might be like from a human factors/usability perspective. Physical knobs and switches, unlike interface widgets, give a tactile sensation and do not change location on the dashboard.
This post is an interesting examination of the unique dashboard:
Think about a car’s dashboard for a second. It’s populated with analog controls: dials, knobs, and levers, all of which control some car subsystem such as temperature, audio, or navigation. These analog dials, while old, have two features: tactility and physical analogy. Respectively, this means you can feel for a control, and you have an intuition for how the control’s mechanical action affects your car (eg: counterclockwise on AC increases temperature). These small functions provide a very, very important feature: they allow the driver to keep his or her eyes on the road.
Except for the privileged few with an extraordinary kinesthetic sense of where their hands are, the Model S’s control scheme is an accident waiting to happen. Hell, most of us can barely type with two hands on an iPhone. Now a Model S driver has to manage all car subsystems on a touchscreen with one hand while driving.
The solution, however, may not be heads-up displays or augmented reality, as the author suggests (citing the HUD in the BMW).
While those displays allow the eyes to remain pointed at the road, the HUD itself is always in the way, a persistent distraction. Also, paying attention to the HUD means your attention will not be on the road, and what doesn’t get paid attention to effectively doesn’t exist.
You may have heard the news that Google Reader, probably the most popular RSS reader on the web, is shutting down in a few months. Feedburner, also run by Google, is the service we’ve been using to distribute our RSS feed for readers who use Google Reader or who prefer email subscriptions to our blog. Unfortunately, I have a bad feeling that Feedburner will shut down soon as well.
No fear! You can still get email subscriptions to our blog by entering your email in the right-hand column textfield. If you use a feedreader, you may also want to update your link to us (also on the right side). The redirection should be automatic but you never know with automation!
The new feed link is: http://humanfactorsblog.org/feed/
My previous posts on using the iPad have become some of the most popular posts on this blog. So I thought I would give you an update on my evolving use of the iPad.
My history of use of the iPad started with great skepticism, moved into curious and active experimentation, and has settled into routine usage. Now, it’s an integrated part of my work and play. I’ve even done what was once unthinkable: I wrote nearly an entire manuscript on the iPad without a hardware keyboard! (read on).
With great skepticism I got the original iPad a few months after it was released in 2010. While I could see the theoretical benefits of such a lightweight device, there was not yet much software that was specialized to do any work. In terms of usage, there were probably days that I did not use the iPad. It was primarily relegated to recreational web surfing or curious novelty.
After the release of the iPad 2, however, my usage increased dramatically. The reduction in weight and size, as well as the release of high-quality productivity software, meant that I not only carried it along with my then-laptop (a Fujitsu P1620 ultraportable tablet), I could start to envision how I might replace my laptop. Usage was probably split 20 (iPad)/80 (laptop) in terms of mobile computing. It also helped that it was at this time that I switched my desktop computer and laptop to Mac. This made it much more seamless to use Keynote and Pages as replacements for PowerPoint and Word. I’ve kicked PowerPoint but I can’t yet kick Word to the curb.
The iPad 3 again increased my usage, mainly because the high-resolution display and dramatic speed increase made everything better, especially reading PDFs.
Now, I have an iPad mini, and all the software that I’ve mentioned in previous posts is still usable, but the form factor has truly made it my primary mobile device of choice over the laptop. An always-on, super-lightweight device seems to encourage frequent use in places where even a laptop is clunky (e.g., in bed, or as a passenger in a car). I’m currently working on a manuscript, and I would estimate that I’ve written more than 50% of it on the iPad mini (using the software keyboard and Pages), probably another 10% on the iPhone (reading what I wrote, light editing), and the rest on the desktop or laptop computer.
Keynote is an especially capable presentation app. I’ve worked on full presentations created on the iPad (but presented on a laptop). They are whisked silently through the cloud and are on my laptop/desktop waiting for me.
But there are other things that are making the iPad work especially well for me. One feature that isn’t discussed a great deal in reviews is iCloud. iCloud, in contrast to Dropbox, invisibly keeps my Keynote (class lectures, professional presentations) and Pages (manuscripts) documents in sync on all my devices (desktop, laptop, iPad mini, and iPhone). iCloud is a simpler model with less thinking about spatial file organization (the file is just in the app). I still use Dropbox, but I treat it like an archive: a folder with many levels of folders. iCloud, by contrast, is an active area for current work, a work space. iCloud = short-term memory, Dropbox = long-term memory. This setup works quite well for me.
Uses will differ for different people, but for me (someone who values portability above all else and is a tinkerer) the Mini is a winner (it replaced my iPad 3). I also did not set unrealistic expectations for the device, which may be why I’m so surprised how much of my daily computing can be handled by such a relatively low-powered device. The size and weight of the Mini simply overwhelm any other benefit of the larger iPads. When I travel, I am now more likely to be carrying just the iPad (with no laptop unless I know I’ll need to program or do statistical analysis). In the end, it allows me to do a small number of things in many more places than just at my desk.
To conclude, my most frequently used apps lately are:
Keynote (lecture and presentation creation & editing)
Keynote and Papers are truly exceptional apps that have nearly the full functionality of their desktop counterparts without replicating the same interaction style (i.e., they are optimized for tablets). I actually prefer doing lit searches in the iOS version of Papers to using the desktop version!
This list is short because everything else is for fun!
Paul M. Fitts is widely regarded as the father of human factors. He gets mentioned a lot in HF texts because of his (still influential) law. In more modern times, Donald Norman gets a lot of recognition as the author of The Design of Everyday Things (mentioned in my post below), which introduced the ideas of psychology and human factors to a more mainstream audience. However, someone who rarely gets mentioned (in my 12 years of education I’ve seen him mentioned once) is John E. Karlin, who recently passed away.
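For readers who haven’t encountered Fitts’s law: it predicts the time to move to a target from the target’s distance and width. A minimal sketch in Python, using Fitts’s original formulation (the constants a and b below are hypothetical; in practice they are fitted empirically for a given device):

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Fitts's law: MT = a + b * log2(2D / W).

    a, b   -- empirically fitted constants for a device (hypothetical here)
    distance -- distance to the target's center
    width    -- target size along the axis of motion
    The log term is the "index of difficulty" in bits.
    """
    return a + b * math.log2(2 * distance / width)

# A far, small target is predicted to take longer than a near, large one.
hard = fitts_movement_time(0.1, 0.2, distance=30, width=1)
easy = fitts_movement_time(0.1, 0.2, distance=5, width=5)
```

The practical upshot, and the reason the law still shows up in interface design, is that big, close targets are fast to hit and small, distant ones are slow and error-prone.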
By all accounts a modest man despite his variegated accomplishments (he had a doctorate in mathematical psychology, was trained in electrical engineering and had been a professional violinist), Mr. Karlin, who died on Jan. 28, at 94, was virtually unknown to the general public.
He is still relatively unknown to HF only because he rarely published his results; instead, he worked to solve problems in industry using the scientific method that all psychologists use.
“He was the one who introduced the notion that behavioral sciences could answer some questions about telephone design,” Ed Israelski, an engineer who worked under Mr. Karlin at Bell Labs in the 1970s, said in a telephone interview on Wednesday.
The NYT recently posted an obituary detailing his contributions, including such fundamental ones as the telephone’s numeric keypad layout (different from the calculator layout):
Putting “1-2-3” on the pad’s top row instead of the bottom (the configuration used, then as now, on adding machines and calculators) was also born of Mr. Karlin’s group: they found it made for more accurate dialing.
The piece is very well written, and I’m a little surprised that the author actually seems to understand HF and how it differs from related fields (emphasis added):
It is not so much that Mr. Karlin trained midcentury Americans how to use the telephone. It is, rather, that by studying the psychological capabilities and limitations of ordinary people, he trained the telephone, then a rapidly proliferating but still fairly novel technology, to assume optimal form for use by midcentury Americans.
(NYT: great article but you hyphenated human factors in the 10th paragraph)