I recently published a study (conducted last year) on automation trust and dependence. In that study, we pseudo-wizard-of-oz’ed a smartphone app that would help diabetics manage their condition.
We had to fake it because no such app existed and it would have been too onerous to program one (and we weren’t necessarily interested in the app itself, just in a form of advanced, non-existent automation).
Now, that app is real. I had nothing to do with it, but there are now apps that can help diabetics manage their condition. This NYT article discusses the complex area of healthcare apps:
Smartphone apps already fill the roles of television remotes, bike speedometers and flashlights. Soon they may also act as medical devices, helping patients monitor their heart rate or manage their diabetes, and be paid for by insurance.
The idea of medically prescribed apps excites some people in the health care industry, who see them as a starting point for even more sophisticated applications that might otherwise never be built. But first, a range of issues — around vetting, paying for and monitoring the proper use of such apps — needs to be worked out.
The focus of the article is on regulatory hurdles, while our focus (in the paper) was on how potential patients might accept and react to advice given by a smartphone app.
This humorous NYT article discusses the foibles of auto-correct on computers and phones. Auto-correct, a more advanced descendant of the old spell checker, is a type of automation. We’ve discussed automation many times on this blog.
But auto-correct is unique in that it’s probably one of the most frequent touchpoints between humans and automation.
The article nicely covers, in lay language, many of the concepts of automation:
Out of the loop syndrome:
Who’s the boss of our fingers? Cyberspace is awash with outrage. Even if hardly anyone knows exactly how it works or where it is, Autocorrect is felt to be haunting our cellphones or watching from the cloud.
We are collectively peeved. People blast Autocorrect for mangling their intentions. And they blast Autocorrect for failing to un-mangle them.
I try to type “geocentric” and discover that I have typed “egocentric”; is Autocorrect making a sort of cosmic joke? I want to address my tweeps (a made-up word, admittedly, but that’s what people do). No: I get “twerps.” Some pairings seem far apart in the lexicographical space. “Cuticles” becomes “citified.” “Catalogues” turns to “fatalities” and “Iditarod” to “radiator.” What is the logic?
One more thing to worry about: the better Autocorrect gets, the more we will come to rely on it. It’s happening already. People who yesterday unlearned arithmetic will soon forget how to spell. One by one we are outsourcing our mental functions to the global prosthetic brain.
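As an aside, the “what is the logic?” question has an answer of sorts: a typical corrector ranks dictionary words by some blend of edit distance (how many keystrokes apart two strings are) and word frequency, which is exactly how a rare word you meant can lose to a common word you didn’t. Here is a minimal, hypothetical sketch of that idea in Python; the toy lexicon and frequency counts are invented, and real systems layer on keyboard adjacency and sentence context:

```python
# Minimal, hypothetical sketch of edit-distance-based correction. Real
# autocorrect also weighs keyboard adjacency and sentence context; the
# lexicon and frequency counts here are invented for illustration.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: fewest insertions/deletions/substitutions from a to b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute ca -> cb
        prev = curr
    return prev[-1]

# Toy lexicon mapping words to relative frequencies.
LEXICON = {"geocentric": 2, "egocentric": 9, "tweeps": 1, "twerps": 3}

def autocorrect(typed: str) -> str:
    """Pick the closest lexicon word; break distance ties by higher frequency."""
    return min(LEXICON, key=lambda w: (edit_distance(typed, w), -LEXICON[w]))

# "eeocentric" is one edit from both "geocentric" and "egocentric"; frequency
# breaks the tie, producing exactly the kind of surprise the article mocks.
print(autocorrect("eeocentric"))  # -> egocentric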
Humorously, the article even touches on anthropomorphism of automation (attributing human-like characteristics to it, even unintentionally), which happens to be my research area:
Peter Sagal, the host of NPR’s “Wait Wait … Don’t Tell Me!” complains via Twitter: “Autocorrect changed ‘Fritos’ to ‘frites.’ Autocorrect is effete. Pass it on.”
It’s election season, which means more opportunities to point, laugh, and cry at the state of voting usability. The first example was sent in by Kim W. As part of an NPR story, the reporter dug up a sample ballot. Pretty overwhelming and confusing (“vote for not more than one”??); it makes me long for electronic voting.
Next, Ford is sending out a software update for its popular MyFord Touch in-car telematics system. The NYT article below is excellent at highlighting not only basic usability but also that “user experience” is just as important as technical capability/specs. The article lists a variety of usability quirks that should have been caught in user testing (e.g., “a touch-sensitive area under the touch screen that activates the hazard lights has been replaced with a mechanical button, because Ford learned that drivers were inadvertently turning on the hazard lights as they rested their hand while waiting for the system to respond.”).
I am being facetious when I point and laugh, but seriously, many of these issues could have been caught early with basic, relatively cheap, simple user testing.
“I think they were too willing to rush something out because of the flashiness of it rather than the functionality,” said Michael Hiner, a former stock-car racing crew chief in Akron, Ohio, who bought a Ford Edge Limited last year largely because he and his wife were intrigued by MyFord Touch.
Now Ford has issued a major upgrade that redesigns much of what customers see on the screen and tries to resolve complaints about the system crashing or rebooting while the vehicle is being driven. Ford said on Monday that the upgrade made the touch screens respond to commands more quickly, improved voice recognition capabilities and simplified a design that some say had the potential to create more distractions for drivers who tried to use it on the road. Fonts and buttons on the screen have been enlarged, and the layouts of more than 1,000 screens have been revamped.
Here is a link to some neat new research being done by my colleagues at NCSU. It’s about the development of a tool that restructures software code as it’s being developed, making bugs and features easier to investigate, without changing the code’s behavior in any way that might introduce errors. Dr. Emerson Murphy-Hill developed the interface for this “refactoring” of code and published on it this past semester.
“The researchers designed the marking menus so that the refactoring tools are laid out in a way that makes sense to programmers. For example, tools that have opposite functions appear opposite each other in the marking menu. And tools that have similar functions in different contexts will appear in the same place on their respective marking menus.
Early testing shows that programmers were able to grasp the marking menu process quickly, and the layout of the tools within the menus was intuitive.”
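To make the layout principle concrete, here is a hypothetical sketch of such a marking menu; the command names are standard refactorings, but this particular arrangement is my own illustration, not Murphy-Hill’s actual design:

```python
# Hypothetical marking-menu layout illustrating the "opposites face opposite
# directions" principle. The commands are standard refactorings, but this
# arrangement is invented for illustration, not Murphy-Hill's actual menu.
MARKING_MENU = {
    "N":  "Extract Method",   "S":  "Inline Method",    # inverse operations
    "E":  "Extract Variable", "W":  "Inline Variable",  # sit 180 degrees apart
    "NE": "Pull Up Member",   "SW": "Push Down Member",
}

def opposite(direction: str) -> str:
    """Return the compass direction 180 degrees from the given one."""
    flip = {"N": "S", "S": "N", "E": "W", "W": "E",
            "NE": "SW", "SW": "NE", "NW": "SE", "SE": "NW"}
    return flip[direction]

# Each refactoring's inverse lives at the opposite gesture stroke.
for d in ("N", "E", "NE"):
    print(MARKING_MENU[d], "<->", MARKING_MENU[opposite(d)])
```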
In an NPR story on the history of touch interaction, Bill Buxton recalls the electronic drum experiments that started it all:
“I wasn’t trying to make a computer interface, I was just trying to make a drum,” Buxton tells NPR’s Robert Siegel. “Did I envision what was going to happen today, that it would be in everybody’s pocket — in their smartphone? Absolutely not. Did we realize that things were going to be different, that you could do things that we never imagined? … Absolutely.”
Today, Buxton is known as a pioneer in human-computer interaction, a field of computer science that has seen a spike in consumer demand thanks to a new, seemingly ubiquitous technology: touch.
Turkle says that’s because touch-screen devices appeal to a sentiment that pretty much everyone can relate to: the desire to be a kid again.
“[The] fantasy of using your body to control the virtual is a child’s fantasy of their body being connected to the world,” Turkle says. “That’s the child’s earliest experience of the world and it kind of gets broken up by the reality that you’re separate from the world. And what these phones do is bring back that fantasy in the most primitive way.”
And Turkle warns that living in that fantasy world could mean missing out on the real world around you.
Bret Victor, in “A Brief Rant on the Future of Interaction Design,” pushes back on all this finger-on-glass enthusiasm:
If you’re with me so far, maybe I can nudge you one step further. Look down at your hands. Are they attached to anything? Yes — you’ve got arms! And shoulders, and a torso, and legs, and feet! And they all move!
Any dancer or doctor knows full well what an incredibly expressive device your body is. 300 joints! 600 muscles! Hundreds of degrees of freedom!
The next time you make breakfast, pay attention to the exquisitely intricate choreography of opening cupboards and pouring the milk — notice how your limbs move in space, how effortlessly you use your weight and balance. The only reason your mind doesn’t explode every morning from the sheer awesomeness of your balletic achievement is that everyone else in the world can do this as well.
With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?
Drs. Kelly Caine (of guest post fame) and Dennis Morrison will be presenting on human factors considerations for the design and use of electronic health records. Audience participation is welcome as they discuss this important topic. See abstract below.
In this conversation hour we will discuss the use of electronic health records in clinical practice. Specifically, we will focus on how, when designed using human factors methods, electronic health records may be used to support evidence based practice in clinical settings. We will begin by giving a brief overview of the current state of electronic health records in use in behavioral health settings, as well as outline the potential future uses of such records. Next, we will provide an opportunity for the audience members to ask questions, thus allowing members to guide the discussion to the issues most relevant to them. At the conclusion of the session, participants will have a broader understanding of the role of electronic health records in clinical practice as well as a deeper understanding of the specific issues they face in their practice. In addition, we hope to use this conversation hour as a starting point to generate additional discussions and collaborations on the use of electronic health records in clinical practice, potentially resulting in an agenda for future research in the area of electronic health records in clinical behavioral health practice.
Kelly Caine is the Principal Research Scientist in the Center for Law, Ethics, and Applied Research in Health Information (CLEAR).
Information Foraging Theory is a theory of human-information interaction that aims to explain and predict how people will best shape themselves to their information environments, and how information environments can best be shaped to people. The approach involves a kind of reverse engineering in which the analyst asks (a) what is the nature of the task and information environments, (b) why is a given system a good solution to the problem, and (c) how is that “ideal” solution realized (approximated) by mechanism.
Typically, the key steps in developing a model of information foraging involve: (a) a rational analysis of the task and information environment (often drawing on optimal foraging theory from biology) and (b) a computational production system model of the cognitive structure of the task. I will briefly review work on individual information seeking, and then focus on how this work is being expanded to studies of information production and sense-making in technology-mediated social systems such as wikis, social tagging, social network sites, and Twitter.
In recent years, we have been extending our studies to deal with social interactions on the Web (e.g., wikis, tagging systems, Twitter). This has led to studies of how people assess source credibility (expertise, trustworthiness, bias) and how user interfaces might affect such judgments.
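For the curious, the optimal-foraging borrowing can be made concrete. The classic patch model says a forager should leave a patch (for an information forager: a page or search result) once the instantaneous rate of gain falls to the average rate for the environment as a whole, so patches that are costly to reach get exploited longer. Below is a toy sketch of that rule; the gain curve and numbers are invented for illustration and are not from the talk:

```python
import math

# Toy sketch of the patch-leaving rule from optimal foraging theory, which
# Information Foraging Theory adapts to information "patches" (pages, sites).
# The gain function and all constants below are illustrative assumptions.

def gain(t: float) -> float:
    """Cumulative information gained after t seconds in a patch (diminishing returns)."""
    return 10 * (1 - math.exp(-t / 20))

def optimal_leave_time(travel_cost: float, dt: float = 0.01) -> float:
    """Stay while the marginal gain rate exceeds the overall average rate
    gain(t) / (travel_cost + t); leave at the crossover point."""
    t = dt
    while (gain(t + dt) - gain(t)) / dt > gain(t) / (travel_cost + t):
        t += dt
    return t

# Patches that are cheap to reach should be abandoned sooner than costly ones.
print(round(optimal_leave_time(travel_cost=5), 1))   # ~12.7: leave early
print(round(optimal_leave_time(travel_cost=60), 1))  # ~35.0: linger longer
```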
Health Care Comes Home reviews the state of current knowledge and practice about many aspects of health care in residential settings and explores the short- and long-term effects of emerging trends and technologies. By evaluating existing systems, the book identifies design problems and imbalances between technological system demands and the capabilities of users. Health Care Comes Home recommends critical steps to improve health care in the home. The book’s recommendations cover the regulation of health care technologies, proper training and preparation for people who provide in-home care, and how existing housing can be modified and new accessible housing can be better designed for residential health care. The book also identifies knowledge gaps in the field and how these can be addressed through research and development initiatives.
Consumer Health Information Technology in the Home introduces designers and developers to the practical realities and complexities of managing health at home. It provides guidance and human factors design considerations that will help designers and developers create consumer health IT applications that are useful resources to achieve better health.