Category Archives: hci

Thoughtful and Fun Interfaces in the Reykjavik City Museum

I stopped over in Iceland on the way to a conference and popped in to the Reykjavik City Museum, not knowing what I’d find. I love the idea of technology in a museum, but I’m usually disappointed. Either the concepts are bad, the technology is silly (press a button, light some text), or it just doesn’t work, beaten into submission by armies of 4-year-olds.

Not at the Settlement Exhibit in Reykjavik. There are two unique interfaces I want to cover, but I’ll start at the beginning with a more typical touchscreen that controlled a larger wall display. As you enter the museum, there are multiple stations for reading pages of the Sagas. These are the stories of Iceland’s history from the 9th to 11th centuries, beautifully illustrated.
(Image: a miniature from Njáls saga.)
They have been scanned, so you can browse the pages (with translations) and not damage them. I didn’t have all day to spend there, but after starting some of the Sagas, I wished I had.

Further in you see the reason for the location: the excavation of the oldest known structure in Iceland, a longhouse, is in the museum! Around it are typical displays with text and audio, explaining the structure and what life was like at that time.

Then I moved into a smaller dark room with an attractive lit podium (see video below). You could touch it, and it controlled the large display on the wall, which showed the longhouse as a 3-D virtual reconstruction. As you moved your finger around the circles on the podium, the camera rotated so you could get a good look at all parts of the longhouse. As you moved between circles, a short audio clip would play to introduce the next section. Each circle controlled the longhouse display, but the closer to the center, the more of the “inside” of the structure you could see. Fortunately, I found someone else made a better video of the interaction than I did:
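As an interaction mapping, the podium is pleasingly simple: the angle of your finger around the center sets the orbit of the camera, and the distance from the center sets how far “inside” the model you go. A minimal sketch of how such a mapping might work; this is my guess at the interaction, not the museum’s actual code, and the function name and parameters are mine:

```python
import math

def podium_to_camera(x, y, max_radius=1.0):
    """Hypothetical mapping from a touch point on a circular podium
    to camera parameters for a 3-D model viewer: the angle around the
    center sets the camera's orbit azimuth, and the distance from the
    center sets how far 'inside' the model the camera moves."""
    azimuth_deg = math.degrees(math.atan2(y, x)) % 360.0  # orbit angle
    r = min(math.hypot(x, y), max_radius) / max_radius    # 0 = center
    interior = 1.0 - r  # closer to the center -> deeper inside
    return azimuth_deg, interior
```

Touching the outer ring (r near 1) keeps the camera outside the longhouse; sliding toward the center smoothly pushes the viewpoint indoors, which matches what the exhibit appeared to do.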

The last display was simple but took planning and thought. Near the exit was a large table display of the longhouse. It was also a touch interface, where you could put your hand on the table to activate information about how parts of the house were used. Think of the challenges: when I was there, it was surrounded by 10 people, all touching it at once, all looking for information in different languages. It had to be low enough for everyone to see, but not so low that it was hard to touch. Overall, they did a great job.

Be sure to do a stopover if you cross the Atlantic!

Both videos come from Alex Martire on YouTube.

Treemap sighting in the wild: U.S. Budget proposal

I get pretty excited when I see my favorite infovis being used: the treemap.

Just released today – the proposed U.S. budget as a treemap!

So, how well did this visualization work for its intended purpose?

  • Points awarded for using a treemap – it makes it so easy to see how massive social security and healthcare are.
  • Points deducted for the cluttered overlay text in the Transportation section.
  • Points deducted for making the areas clickable, but not actually providing more information beyond a platitude (“Military Personnel: When it comes to our service members and their families, America stands united in support. The budget helps ensure that those who serve our country receive all the support and opportunities they’ve earned and deserve.”)
  • Points deducted for making me click a link to “learn more” from a YouTube video of the entire State of the Union address when I could be learning more with a deeper treemap.

I’d like to see more of the blocks broken down into the components they fund, making it as informative and transparent as my go-to example of a treemap: the stock market. My second favorite treemap is a program that treemaps your hard drive, making it easy to see where those giant space-hogging files are hiding, deep in directories you forgot were there. I treemapped my lab server with it as we ran out of space and found giant video files about 10 directories down in an unlikely spot that were eating up our GBs.
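What makes treemaps work for budgets, stocks, and hard drives alike is that rectangle area is proportional to the quantity being shown. A minimal slice-and-dice layout sketch, purely illustrative (real tools usually use squarified layouts to keep rectangles closer to square, and the budget figures below are made up):

```python
def slice_and_dice(items, x, y, w, h, vertical=False):
    """Assign each (name, size) pair a rectangle (x, y, w, h) whose
    area is proportional to its size. Slicing one direction per level
    (and alternating at each nesting level) is the classic slice-and-
    dice treemap layout."""
    total = sum(size for _, size in items)
    rects = {}
    offset = 0.0
    for name, size in items:
        frac = size / total
        if vertical:
            rects[name] = (x, y + offset * h, w, frac * h)
        else:
            rects[name] = (x + offset * w, y, frac * w, h)
        offset += frac
    return rects

# Hypothetical top-level budget categories (not real figures):
rects = slice_and_dice(
    [("Social Security", 800), ("Defense", 600), ("Transportation", 100)],
    0.0, 0.0, 1.0, 1.0)
```

Recursing into each rectangle with the children of that category (flipping `vertical` each level) is exactly the “deeper treemap” I wish the budget site had offered.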

Perhaps we could have a treemap that lets us change things in the budget to see how our choices would play out, like the American Public Media interactive “Budget Hero” game from a few years ago (now defunct, or I would link it)? I learned a LOT about what could and couldn’t budge in the budget from that game.

*All the points deducted are far outweighed by my support of the treemap being used in the first place! Brilliant!

Radio interview with Rich

Our own Rich Pak was interviewed by the Clemson radio show “Your Day.”

(Audio clip embedded in the original post.)

They cover everything from the birth of human factors psychology to the design of prospective memory aids for older adults. Enjoy!

Worst Mobile Interface Ever

I was reading articles the other day and came across a site that, as many do, reformatted for my phone. Almost all reformatted-for-mobile sites are terrible, but this one is my favorite.
You cannot scroll through the 21-page article by moving your finger up and down, as you would on a normal website. The only way to change pages is the horizontal slider at the bottom. Good luck moving it slightly enough to go forward just one page! And yes, moving the slider left and right moves the page up and down.

Usability process not used for ACA website

A recently released report, prepared in March 2013, reveals the process behind the creation of Healthcare.gov. Hindsight is always 20/20, but we’ve also worked hard to establish best practices for considering both engineering and the user in software development. These contributions need to be valued, especially for large-scale projects. Looking through the slides, one thing I note is that even this improved approach barely mentions the end users of the website. There is one slide that states “Identify consumer paths; review and modify vignettes.” The two examples of this are users who have more or less complex needs when signing up for insurance. I don’t see any mention of involving actual users prior to release.

The NPR write-up states:

Consultants noted there was no clear leader in charge of this project, which we now know contributed to its disastrous release. And there was no “end-to-end testing” of its full implementation, something we now know never happened.

Some of this may fall on us, for not being convincing enough that human factors methods are worth the investment. How much would the public be willing to pay for a solid usability team to work with the website developers?

App Usability Evaluations for the Mental Health Field

We’ve posted before on usability evaluations of iPads and apps for academics (e.g., here and here), but today I’d like to point to a blog dedicated to evaluating apps for mental health professionals.

In the newest post, Dr. Jeff Lawley discusses the usability of a DSM Reference app from Kitty CAT Psych. For those who didn’t take intro psych in college, the DSM is the Diagnostic and Statistical Manual, which classifies symptoms into disorders. It’s interesting to read an expert take on this app – he considers attributes I would not have thought of, such as whether the app retains information (privacy issues).

As Dr. Lawley notes on his “about” page, there are few apps designed for mental health professionals and even fewer evaluations of these apps. Hopefully his blog can fill that niche and inspire designers to create more mobile tools for these professionals.

Prescription Smartphone Apps

I recently published a study (conducted last year) on automation trust and dependence. In that study, we pseudo-wizard-of-oz’ed a smartphone app that would help diabetics manage their condition.

We had to fake it because no such app existed, and it would have been too onerous to program one (we weren’t necessarily interested in the app itself, just in a form of advanced, not-yet-existing automation).

Now, that app is real. I had nothing to do with it, but there are now apps that can help diabetics manage their condition. This NYT article discusses the complex area of healthcare apps:

Smartphone apps already fill the roles of television remotes, bike speedometers and flashlights. Soon they may also act as medical devices, helping patients monitor their heart rate or manage their diabetes, and be paid for by insurance.

The idea of medically prescribed apps excites some people in the health care industry, who see them as a starting point for even more sophisticated applications that might otherwise never be built. But first, a range of issues — around vetting, paying for and monitoring the proper use of such apps — needs to be worked out.

The focus of the article is on regulatory hurdles while our focus (in the paper) was how potential patients might accept and react to advice given by a smartphone app.

(photo: Ozier Muhammad/The New York Times)

Everyday Automation: Auto-correct

This humorous NYT article discusses the foibles of auto-correct on computers and phones. Auto-correct, a more advanced type of the old spell checker, is a type of automation. We’ve discussed automation many times on this blog.

But auto-correct is unique in that it’s probably one of the most frequent touchpoints between humans and automation.

The article nicely covers, in lay language, many of the concepts of automation:

Out of the loop syndrome:

Who’s the boss of our fingers? Cyberspace is awash with outrage. Even if hardly anyone knows exactly how it works or where it is, Autocorrect is felt to be haunting our cellphones or watching from the cloud.

Trust:

We are collectively peeved. People blast Autocorrect for mangling their intentions. And they blast Autocorrect for failing to un-mangle them.

I try to type “geocentric” and discover that I have typed “egocentric”; is Autocorrect making a sort of cosmic joke? I want to address my tweeps (a made-up word, admittedly, but that’s what people do). No: I get “twerps.” Some pairings seem far apart in the lexicographical space. “Cuticles” becomes “citified.” “Catalogues” turns to “fatalities” and “Iditarod” to “radiator.” What is the logic?
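There is a logic of sorts behind those pairings: autocorrect candidates are typically nearby dictionary words under some edit-distance metric (real engines also weight keyboard adjacency and word frequency, which I’m leaving out). A minimal Levenshtein-distance sketch, purely illustrative:

```python
def levenshtein(a, b):
    """Minimum number of single-character edits (insert, delete,
    substitute) needed to turn a into b -- one rough notion of how
    'close' two words are in an autocorrect dictionary."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]
```

By this measure “tweeps” and “twerps” differ by a single substitution, and “geocentric” is only two edits from “egocentric,” so the swaps that feel far apart semantically can be right next door in edit space.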

Reliance:

One more thing to worry about: the better Autocorrect gets, the more we will come to rely on it. It’s happening already. People who yesterday unlearned arithmetic will soon forget how to spell. One by one we are outsourcing our mental functions to the global prosthetic brain.

Humorously, the article even touches on anthropomorphism of automation (attributing human-like characteristics to it, even unintentionally), which is my research area:

Peter Sagal, the host of NPR’s “Wait Wait … Don’t Tell Me!” complains via Twitter: “Autocorrect changed ‘Fritos’ to ‘frites.’ Autocorrect is effete. Pass it on.”

(photo credit el frijole @flickr)

Introducing the principle of graceful error recovery to state government

A North Carolina State Representative just accidentally overrode a veto on “fracking” due to being tired and pressing the wrong button during the vote.

Apparently, they aren’t allowed to change their votes if it would alter the overall outcome. So even though she realized it right when she pressed the button, the override stands.

From the article on WRAL:

 Carney characterized her vote as “very accidental.”

“It is late. Here we are rushing to make these kind of decisions this time of night,” she said.

Carney pointed out that she has voted against fracking in the past, and said she spent the day lobbying other Democrats to uphold the veto of Senate Bill 820.

“And then I push the green button,” she said.

Just after the vote, Carney’s voice could be heard on her microphone, saying “Oh my gosh. I pushed green.”
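The human factors fix here is graceful error recovery: stage the action, echo it back, and keep it reversible for at least a brief window before committing. A toy sketch of such a vote console; this is a hypothetical design of mine, not anything like the General Assembly’s actual system:

```python
import time

class VoteConsole:
    """Toy vote console illustrating graceful error recovery:
    a button press only stages the vote, which can be withdrawn
    within a grace period before being committed."""

    def __init__(self, grace_seconds=5.0, clock=time.monotonic):
        self.grace = grace_seconds
        self.clock = clock          # injectable for testing
        self.staged = None          # (choice, timestamp) or None
        self.committed = None

    def press(self, choice):
        """Stage a vote; nothing is final yet."""
        self.staged = (choice, self.clock())

    def undo(self):
        """Withdraw the staged vote if still inside the grace period."""
        if self.staged and self.clock() - self.staged[1] <= self.grace:
            self.staged = None
            return True
        return False

    def finalize(self):
        """Commit whatever is staged once the member confirms."""
        if self.staged:
            self.committed = self.staged[0]
            self.staged = None
        return self.committed
```

Under a design like this, “Oh my gosh, I pushed green” is a recoverable slip rather than a recorded vote, without changing the rule that a confirmed vote stands.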

Usability Follies in the News

It’s election season, which means more opportunities to point, laugh, and cry at the state of voting usability. The first is sent in by Kim W. As part of an NPR story, the reporter dug up a sample ballot. Pretty overwhelming and confusing (“vote for not more than one”??); it makes me long for electronic voting.

Next, Ford is sending out a software update to their popular MyFord Touch car telematics system. The following NYT article is excellent in highlighting that not only basic usability but the whole “user experience” is just as important as technical capabilities/specs. The article lists a variety of usability quirks that should have been caught in user testing (e.g., “a touch-sensitive area under the touch screen that activates the hazard lights has been replaced with a mechanical button, because Ford learned that drivers were inadvertently turning on the hazard lights as they rested their hand while waiting for the system to respond.”).

MyFord Touch (photo: NYT)

I am being facetious when I point and laugh, but seriously, many of these issues could have been caught early with basic, relatively cheap, simple user testing.

“I think they were too willing to rush something out because of the flashiness of it rather than the functionality,” said Michael Hiner, a former stock-car racing crew chief in Akron, Ohio, who bought a Ford Edge Limited last year largely because he and his wife were intrigued by MyFord Touch.

Now Ford has issued a major upgrade that redesigns much of what customers see on the screen and tries to resolve complaints about the system crashing or rebooting while the vehicle is being driven. Ford said on Monday that the upgrade made the touch screens respond to commands more quickly, improved voice recognition capabilities and simplified a design that some say had the potential to create more distractions for drivers who tried to use it on the road. Fonts and buttons on the screen have been enlarged, and the layouts of more than 1,000 screens have been revamped.