A recently released report, prepared in March 2013, reveals the process behind the creation of Healthcare.gov. Hindsight is always 20/20, but we've also worked hard to establish best practices for considering both engineering and the user in software development. Those contributions need to be valued, especially on large-scale projects. Looking through the slides, one thing I notice is that even this planning document barely mentions the end users of the website. One slide states "Identify consumer paths; review and modify vignettes," with two examples: users who have more or less complex needs when signing up for insurance. I see no mention of involving actual users prior to release.
Consultants noted there was no clear leader in charge of the project, which we now know contributed to its disastrous release. They also called for "end-to-end testing" of the full implementation, something we now know never happened.
Some of this may fall on us, for not making a convincing enough case that human factors methods are worth the investment. How much would the public be willing to pay for a solid usability team to work alongside the website developers?
In the newest post, Dr. Jeff Lawley discusses the usability of a DSM Reference app from Kitty CAT Psych. For those who didn't take intro psych in college, the DSM is the Diagnostic and Statistical Manual, which classifies symptoms into disorders. It's interesting to read an expert's take on this app: he considers attributes I would not have thought of, such as whether the app retains information (a privacy issue).
As Dr. Lawley notes on his “about” page, there are few apps designed for mental health professionals and even fewer evaluations of these apps. Hopefully his blog can fill that niche and inspire designers to create more mobile tools for these professionals.
I recently published a study (conducted last year) on automation trust and dependence. In that study, we pseudo-wizard-of-oz'ed a smartphone app that would help diabetics manage their condition: participants interacted with what appeared to be working automation, while we supplied its behavior behind the scenes.
We had to fake it because no such app existed and it would have been too onerous to program one (and we weren't necessarily interested in the app itself, just in a form of advanced, non-existent automation).
Now, that app is real. I had nothing to do with it, but there are now apps that can help diabetics manage their condition. This NYT article discusses the complex area of healthcare apps:
Smartphone apps already fill the roles of television remotes, bike speedometers and flashlights. Soon they may also act as medical devices, helping patients monitor their heart rate or manage their diabetes, and be paid for by insurance.
The idea of medically prescribed apps excites some people in the health care industry, who see them as a starting point for even more sophisticated applications that might otherwise never be built. But first, a range of issues — around vetting, paying for and monitoring the proper use of such apps — needs to be worked out.
The focus of the article is on regulatory hurdles, while our focus (in the paper) was on how potential patients might accept and react to advice given by a smartphone app.
This humorous NYT article discusses the foibles of auto-correct on computers and phones. Auto-correct, a more advanced type of the old spell checker, is a type of automation. We’ve discussed automation many times on this blog.
But auto-correct is unique in that it’s probably one of the most frequent touchpoints between humans and automation.
The article nicely covers, in lay language, many of the concepts of automation:
Out of the loop syndrome:
Who’s the boss of our fingers? Cyberspace is awash with outrage. Even if hardly anyone knows exactly how it works or where it is, Autocorrect is felt to be haunting our cellphones or watching from the cloud.
We are collectively peeved. People blast Autocorrect for mangling their intentions. And they blast Autocorrect for failing to un-mangle them.
I try to type “geocentric” and discover that I have typed “egocentric”; is Autocorrect making a sort of cosmic joke? I want to address my tweeps (a made-up word, admittedly, but that’s what people do). No: I get “twerps.” Some pairings seem far apart in the lexicographical space. “Cuticles” becomes “citified.” “Catalogues” turns to “fatalities” and “Iditarod” to “radiator.” What is the logic?
One more thing to worry about: the better Autocorrect gets, the more we will come to rely on it. It’s happening already. People who yesterday unlearned arithmetic will soon forget how to spell. One by one we are outsourcing our mental functions to the global prosthetic brain.
Humorously, it even touches on anthropomorphism of automation (attributing human-like characteristics to it, even unintentionally), which is my research area:
Peter Sagal, the host of NPR’s “Wait Wait … Don’t Tell Me!” complains via Twitter: “Autocorrect changed ‘Fritos’ to ‘frites.’ Autocorrect is effete. Pass it on.”
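For readers wondering "what is the logic?" behind those strange substitutions: at its core, a spell corrector picks the dictionary word closest to what you typed, for some definition of "closest." Here is a toy sketch using plain edit distance; real auto-correct systems also weight keyboard layout, word frequency, and sentence context, and the tiny word list below is purely hypothetical.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    or substitutions needed to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def autocorrect(word: str, dictionary: list[str]) -> str:
    """Replace word with the nearest entry in a (toy) dictionary."""
    return min(dictionary, key=lambda w: levenshtein(word, w))

# Swapping two letters is only two edits, so "geocentric" lands on
# "egocentric" -- exactly the kind of cosmic joke the article describes.
words = ["egocentric", "geometric", "eccentric"]
print(autocorrect("geocentric", words))  # -> "egocentric"
```

Pairs like "catalogues" and "fatalities" feel far apart to us, but under a purely character-based metric they can be surprisingly close, which is part of why corrections sometimes look absurd.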
It's election season, which means more opportunities to point, laugh, and cry at the state of voting usability. The first item is sent in by Kim W. As part of an NPR story, the reporter dug up a sample ballot. It's pretty overwhelming and confusing ("vote for not more than one"??) and makes me long for electronic voting.
Next, Ford is sending out a software update to its popular MyFord Touch car telematics system. The following NYT article is excellent at highlighting not only the importance of basic usability but also that "user experience" is just as important as technical capability and specs. The article lists a variety of usability quirks that should have been caught in user testing (e.g., "a touch-sensitive area under the touch screen that activates the hazard lights has been replaced with a mechanical button, because Ford learned that drivers were inadvertently turning on the hazard lights as they rested their hand while waiting for the system to respond.").
I am being facetious when I point and laugh, but seriously: many of these issues could have been caught early with basic, relatively cheap, simple user testing.
“I think they were too willing to rush something out because of the flashiness of it rather than the functionality,” said Michael Hiner, a former stock-car racing crew chief in Akron, Ohio, who bought a Ford Edge Limited last year largely because he and his wife were intrigued by MyFord Touch.
Now Ford has issued a major upgrade that redesigns much of what customers see on the screen and tries to resolve complaints about the system crashing or rebooting while the vehicle is being driven. Ford said on Monday that the upgrade made the touch screens respond to commands more quickly, improved voice recognition capabilities and simplified a design that some say had the potential to create more distractions for drivers who tried to use it on the road. Fonts and buttons on the screen have been enlarged, and the layouts of more than 1,000 screens have been revamped.
Here is a link to some neat new research being done by my colleagues at NCSU. It's about the development of a tool that instantly changes the structure of software code as it's being developed, allowing for different ways to investigate bugs and features without changing the code's behavior in any way that might introduce errors. Dr. Emerson Murphy-Hill developed the interface for this "refactoring" of code and published on it this past semester.
“The researchers designed the marking menus so that the refactoring tools are laid out in a way that makes sense to programmers. For example, tools that have opposite functions appear opposite each other in the marking menu. And tools that have similar functions in different contexts will appear in the same place on their respective marking menus.
Early testing shows that programmers were able to grasp the marking menu process quickly, and the layout of the tools within the menus was intuitive.”
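For non-programmers: "refactoring" means restructuring code without changing what it computes, which is why tool support matters so much; doing it by hand invites mistakes. A hypothetical before-and-after, assuming Python, showing the kind of "extract variable" transformation such tools automate:

```python
def total_price_before(quantity, unit_price):
    # Before refactoring: the subtotal is recomputed inline and the
    # discount logic is buried in one dense expression.
    return (quantity * unit_price - (quantity * unit_price) * 0.1
            if quantity > 10 else quantity * unit_price)

def total_price_after(quantity, unit_price):
    # After refactoring: the repeated expression is extracted into a
    # named variable, making the 10% bulk discount easy to see.
    subtotal = quantity * unit_price
    if quantity > 10:
        return subtotal - subtotal * 0.1
    return subtotal

# The behavior is identical before and after, which is the whole point:
assert total_price_before(20, 2.0) == total_price_after(20, 2.0)
assert total_price_before(5, 2.0) == total_price_after(5, 2.0)
```

A marking-menu interface like the one described above is a way of invoking transformations of this sort quickly, with the tool, not the programmer, guaranteeing that behavior is preserved.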
“I wasn’t trying to make a computer interface, I was just trying to make a drum,” Buxton tells NPR’s Robert Siegel. “Did I envision what was going to happen today, that it would be in everybody’s pocket — in their smartphone? Absolutely not. Did we realize that things were going to be different, that you could do things that we never imagined? … Absolutely.”
Today, Buxton is known as a pioneer in human-computer interaction, a field of computer science that has seen a spike in consumer demand thanks to a new, seemingly ubiquitous technology: Touch.”
“Turkle says that’s because touch-screen devices appeal to a sentiment that pretty much everyone can relate to: the desire to be a kid again.
“[The] fantasy of using your body to control the virtual is a child’s fantasy of their body being connected to the world,” Turkle says. “That’s the child’s earliest experience of the world and it kind of gets broken up by the reality that you’re separate from the world. And what these phones do is bring back that fantasy in the most primitive way.”
And Turkle warns that living in that fantasy world could mean missing out on the real world around you.”
If you’re with me so far, maybe I can nudge you one step further. Look down at your hands. Are they attached to anything? Yes — you’ve got arms! And shoulders, and a torso, and legs, and feet! And they all move!
Any dancer or doctor knows full well what an incredibly expressive device your body is. 300 joints! 600 muscles! Hundreds of degrees of freedom!
The next time you make breakfast, pay attention to the exquisitely intricate choreography of opening cupboards and pouring the milk — notice how your limbs move in space, how effortlessly you use your weight and balance. The only reason your mind doesn’t explode every morning from the sheer awesomeness of your balletic achievement is that everyone else in the world can do this as well.
With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?
Drs. Kelly Caine (of guest post fame) and Dennis Morrison will be presenting on human factors considerations for the design and use of electronic health records. Audience participation is welcome as they discuss this important topic. See abstract below.
In this conversation hour we will discuss the use of electronic health records in clinical practice. Specifically, we will focus on how, when designed using human factors methods, electronic health records may be used to support evidence based practice in clinical settings. We will begin by giving a brief overview of the current state of electronic health records in use in behavioral health settings, as well as outline the potential future uses of such records. Next, we will provide an opportunity for the audience members to ask questions, thus allowing members to guide the discussion to the issues most relevant to them. At the conclusion of the session, participants will have a broader understanding of the role of electronic health records in clinical practice as well as a deeper understanding of the specific issues they face in their practice. In addition, we hope to use this conversation hour as a starting point to generate additional discussions and collaborations on the use of electronic health records in clinical practice, potentially resulting in an agenda for future research in the area of electronic health records in clinical behavioral health practice.
Kelly Caine is the Principal Research Scientist at the Center for Law, Ethics, and Applied Research in Health Information (CLEAR).