A Blind Person’s Interactions With Technology

Braille labels

The latest issue of Communications of the ACM has an interesting story on the unique issues in interface design and usability when the user is blind.  The author/researchers Kristen Shinohara and Josh Tenenberg conducted interviews to examine limitations in current accessibility technologies for the blind.

Showing off her BrailleNote, one participant said she prefers reading Braille to listening to talking software because it is quiet.

She also said that carrying around her awkwardly shaped labeler makes her feel self-conscious, and she expressed frustration when she is not acknowledged in casual social situations due to her blindness. A concrete design modification she suggested is to allow a Braille labeler to also make print labels. A dual Braille/print labeler would let her create labels so she could better share the mixed CDs she makes for friends.

A great pull quote from the article:

Simply replacing one interaction mode, such as the display of text on a screen with a functionally equivalent mode, as in speaking the text aloud, is not necessarily equivalent from the point of view of user experience.

Politics and Research: Driver Distraction

A hot item in the news is that research on multitasking while driving was suppressed back in 2002 because NHTSA was afraid of “antagonizing” Congress. An excerpt from the NYT article:

But such an ambitious study never happened. And the researchers’ agency, the National Highway Traffic Safety Administration, decided not to make public hundreds of pages of research and warnings about the use of phones by drivers — in part, officials say, because of concerns about angering Congress.

Not all the research went unpublished. The safety agency put on its Web site an annotated bibliography of more than 150 scientific articles that showed how a cellphone conversation while driving taxes the brain’s processing power. But the bibliography included only a list of the articles, not the one-page summaries of each one written by the researchers.

Chris Monk, who researched the bibliography for 18 months, said the exclusion of the summaries took the teeth out of the findings.

“It became almost laughable,” Mr. Monk said. “What they wound up finally publishing was a stripped-out summary.”

Reviewers of this literature take note: the summaries are quite good and can be found in the original NHTSA report, pages 13 to 99.

Many news sources covered this topic.

The articles seem to imply that there was knowledge available to NHTSA but not to anyone else, certainly not Congress. The news articles mentioned outside research only briefly. For example, one excerpt summarized:

The research mirrors other studies about the dangers of multitasking behind the wheel. Research shows that motorists talking on a phone are four times as likely to crash as other drivers, and are as likely to cause an accident as someone with a .08 blood alcohol content.

The links go to the start of PowerPoint presentations included in the NHTSA document rather than to the actual research. There is nothing wrong with the PowerPoints, though it would be nice if the HF researchers and professionals who have worked on this problem got direct credit for the knowledge they provided to the world.

I’m not sure why work done by NHTSA would be the only work taken seriously by Congress, but perhaps I’m not understanding correctly. HF researchers have known for years that a hands-free headset is not an inoculation against poor driving performance and that many in-car activities consume attentional resources. David Strayer (whose articles I use in my Intro to Human Factors class) has worked for years on the problem of driver distraction. His work alone, often with Frank Drews, produced eight published articles on driver distraction between 2001 and 2003, when the NHTSA report was written. This is, of course, in addition to work dating back to the inception of the cell phone (these are just a few I cribbed from Strayer and Johnston’s references in their 2001 article):

  • Alm, H. & Nilsson, L. (1995). The effects of a mobile telephone task on driver behaviour in a car following situation. Accident Analysis and Prevention, 27, 707-715.
  • Briem, V. & Hedman, L.R. (1995). Behavioural effects of mobile telephone use during simulated driving. Ergonomics, 38, 2536-2562.
  • Redelmeier, D.A., & Tibshirani, R.J. (1997). Association between cellular-telephone calls and motor vehicle collisions. The New England Journal of Medicine, 336, 453-458.

My favorite comes from 1969(!):

  • Brown, I.D., Tickner, A.H., & Simmonds, D.C.V. (1969). Interference between concurrent tasks of driving and telephoning. Journal of Applied Psychology, 53, 419-424.

Consumer Reports weighed in on this today as well.

Ford’s Use of Personas in Design

This NYT article delves into the use of personas at Ford Motor Company.  The article is written to imply that the use of personas (or “archetypes,” as Ford calls them) is novel.  It also delves uncomfortably and unnecessarily into Jungian psychology (psychological archetypes, eh?).

But many designers and user experience people have been using personas for quite a while.  The article does not mention Alan Cooper who was pivotal in articulating the benefits of personas in design.  From the article:

ANTONELLA is an attractive 28-year old woman who lives in Rome. Her life is focused on friends and fun, clubbing and parties.

Antonella was the guiding personality for the Ford Verve, a design study that served as the basis for the latest-generation Fiesta. A character invented by Ford designers to help them imagine cars better tailored to their intended customers, she embodies a philosophy that guides the company’s design studios these days: to design the car, first design the driver.

Personas can contain varying levels of detail, but some include the character’s motivations and attitudes (as evidenced above).  They should contain enough detail for the designer to answer questions like “Would Antonella like this feature?” or “How would she complete this task?”

Another benefit of personas is that they help focus the design team’s efforts on a single user (who represents a class of users).  More detail on personas can be found in the Wikipedia entry or, even better, Alan Cooper’s book.
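As a rough sketch of the level of detail a persona might capture (the fields and the `would_value` check here are illustrative inventions, not Cooper’s or Ford’s actual template), a persona can be modeled as a simple structured record that the team can query during design discussions:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A lightweight design persona (illustrative fields only)."""
    name: str
    age: int
    location: str
    motivations: list  # what drives the character's choices
    attitudes: list    # how she feels about products and tasks

    def would_value(self, feature_keywords: set) -> bool:
        """Crude check: does a proposed feature speak to any motivation?"""
        words = {w.lower() for m in self.motivations for w in m.split()}
        return bool(words & {k.lower() for k in feature_keywords})

# Antonella, as described in the article
antonella = Persona(
    name="Antonella", age=28, location="Rome",
    motivations=["friends and fun", "clubbing and parties"],
    attitudes=["style-conscious", "social"],
)

print(antonella.would_value({"parties", "playlists"}))  # → True
```

The point is not the code itself but the discipline: every proposed feature gets checked against a concrete character rather than a vague “the user.”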

When Users Complain: Blackboard

There is a great article over at Inside Higher Ed describing what happens when a company with no evidence of a usability process finally asks its users for feedback.

At an open “listening session” with top executives of Blackboard here Wednesday at the company’s annual conference, college officials expressed frustration with many of the system’s fundamental characteristics. At times, the meeting seemed to turn into a communal gripe session, with complaints ranging from the system’s discussion forum application, to the improved — but still lacking — user support, to the training materials for faculty members. Participants’ concerns were often greeted with nods of agreement and outright applause from their peers as they spoke of their frustrations with the system.

“Every time we have a migration [to an updated version of Blackboard], we have new features to figure out. You should be providing us workable faculty materials with your product,” one commenter said amidst applause by those in the audience. “You put the burden on ourselves … and then create the documentation and then train. That’s why so many of us struggle to move forward to the next [version]. We are Blackboard on our campuses, and for us to be advocates, you have to give us the tools to be successful — training.” She emphasized that she would rather see more of a focus on fundamentals like training than updated versions of the software.

As a long-time user of Blackboard (at two universities), I can speak about it in HF terms. One of the biggest usability problems with the system is mode errors.

  • There are multiple modes, each with their own set of sometimes overlapping (sometimes not) features.
  • You can perform 60% of a complex, time-consuming task in one mode, only to realize that it cannot be completed in that mode and you have to start over in another.
  • I can see how users would blame themselves, thinking “I KNEW I had to be in build mode to do that. I just didn’t remember to change from teach mode.”
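The problem described above can be sketched abstractly (a toy model, not Blackboard’s actual code, and the mode and action names are made up): each mode exposes an overlapping-but-different set of actions, so a task begun in one mode can hit an action that mode simply does not offer.

```python
class ModeError(Exception):
    """Raised when an action is attempted in a mode that lacks it."""

# Overlapping-but-different action sets per mode (hypothetical names)
MODE_ACTIONS = {
    "build": {"add_file", "create_folder", "edit_page"},
    "teach": {"grade", "post_announcement", "edit_page"},
}

class Course:
    def __init__(self):
        self.mode = "teach"

    def do(self, action):
        if action not in MODE_ACTIONS[self.mode]:
            raise ModeError(f"'{action}' is unavailable in {self.mode} mode")
        return f"{action} done in {self.mode} mode"

course = Course()
course.do("edit_page")        # available in either mode, so no warning yet
try:
    course.do("add_file")     # partway through the task: wrong mode
except ModeError as err:
    print(err)                # user must switch modes and start over
course.mode = "build"
course.do("add_file")         # now it succeeds
```

Note that `edit_page` succeeds in both modes: the overlap is exactly what hides the mode boundary from the user until it is too late.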

Here is an example screen. The left screen shows “build” mode, with the sidebar options open. Once the user realizes the task can’t be completed in build mode and must be done in “teach” mode, he or she clicks “teach” and the right screen appears. (Screens are overlapped for this example only.)

[Screenshot: Blackboard’s build and teach mode screens, overlapped]

Notice the similarity of the pages. You can no longer add a file because you are ‘teaching’; adding content to your course is not considered teaching. Finally, the sidebar collapses when the mode is changed. Because the icons are not helpful, navigating in the new mode requires an extra click on the sidebar to open it back up before starting the task anew.

I could go on, but the amount of time and analysis I have put into Blackboard over the last six years would require a consulting fee from them. 🙂

(post image: http://www.flickr.com/photos/8078381@N03/3279725831/)

Augmented Reality for Everyone

There have been many recent examples of consumer-friendly augmented reality applications for smartphone users.  I remember reading about augmented reality research over a decade ago (in an HCI class) and how bulky, expensive, experimental, and out-of-reach it seemed back then.  The systems of that era required head-mounted displays physically attached to cameras and large computers.  Now it is available to any iPhone or Android smartphone user.

The first example below overlays subway signage and directional arrows to help find your way around the NY subway.  This seems great for tourists who may not be regular users of the metro (wish I had this when I was in the Netherlands last month).

Speaking of the Netherlands, the second example is for Android phones and overlays information about bars, restaurants, and houses for sale in Amsterdam:

These are certainly impressive examples of augmented reality. But another fun and simple recent example is the ball tracker that was used by ESPN:

One possible reason we like these (we as in “users”) is that augmented reality applications pre-integrate information for us (in the first two examples), reducing the need to do it ourselves (a working-memory- and time-intensive activity), or they keep information available longer than sensory memory usually allows (the ball path), letting us see patterns that would otherwise be invisible.

Population Trends: Age

Perhaps you are like me, and always looking for great images to put in your presentations about why it’s important to consider aging in human factors work. Or perhaps you just like a good, creative visualization. Well, here you go on both counts.

This comes courtesy of Mark Thoma of the Economist’s View blog, created from census data. It shows the percentage of the population at different ages (colors) at different times (x-axis).

[Chart: U.S. age distribution over time]

I think it’s great how you can see the baby boomer wave move across the graphic!
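The computation behind a chart like this is simple: from counts by age group and year, derive each group’s percentage of the total population. A minimal sketch with made-up numbers (these are not the actual census figures):

```python
# Hypothetical population counts (millions) by age group -- illustrative only
counts = {
    1950: {"0-19": 51, "20-39": 47, "40-59": 35, "60+": 18},
    2050: {"0-19": 96, "20-39": 102, "40-59": 97, "60+": 104},
}

def shares(year_counts):
    """Convert raw counts into percentage-of-population shares."""
    total = sum(year_counts.values())
    return {group: round(100 * n / total, 1) for group, n in year_counts.items()}

for year, year_counts in counts.items():
    print(year, shares(year_counts))
```

Plotting those per-year shares as stacked colors against the year axis produces exactly the kind of graphic shown above, with a birth cohort visible as a band migrating upward through the age groups.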

Here is his quick comparison between 1950 and 2050. It’s a whole new world:

[Charts: age distributions for 1950 and 2050]

Time.com article on Anne’s research with Games & Aging

Our own Anne McLaughlin was featured in a recent article in Time.com.  Anne and her colleagues Jason Allaire (NCSU) and Maribeth Gandy (Georgia Tech) were recently awarded a $1.2 million grant from the National Science Foundation to study using games to moderate cognitive decline in older adults.

Their plan is to study what parts of games might help cognitive performance and then to create a new game based on these components.

There is, of course, no cure for memory loss, and no preventive vaccine. Yet a rapidly growing body of evidence suggests that certain behaviors may reliably slow the effects of age-related cognitive decline. Chief among them: eating right, exercising and engaging in social activity and mentally challenging tasks.

McLaughlin and Allaire’s new study will follow 270 seniors as they play the Wii game Boom Blox. Gameplay involves demolishing targets like a medieval castle or a space ship using an arsenal of weapons such as slingshots and cannonballs. While those particular skills may not seem transferable to off-screen life, McLaughlin says she and her colleagues chose Boom Blox specifically because it does require a wide range of real-world skills, including memory, special ability, reasoning and problem solving.  [ed: ‘special ability’ should be spatial ability]

Why Boom Blox?  Anne tells me that she:

“…actually chose the game after doing task analyses on many games, seeing what fit our profile, then showing those games to OA [older adults] in a focus group and getting “buy in” for what they said they would play.”

Below is an annotated screen shot of Boom Blox and an excerpt of the task analysis of the game and what abilities are required.

Annotated screen shot of Boom Blox

Task analysis (cognitive requirements of the game)

[Can Gaming Slow Mental Decline in the Elderly? at Time.com]

Is Older Adult Interest in the Wii Interface Just Hype?: or “I want to try bowling”

I’d like to share a conversation with my mother that occurred today. She is in her 60s, and although she uses a computer for communication, she has never played solitaire or shown any interest in a video game.

Nikki says:
The wee? Is a game that lets you think you are doing a sport?

Anne says:
Yes, you use the controller, that looks like a remote control, just like a tennis racquet or a bowling ball.

Nikki says:
Do you actually pretend you are bowling?

Anne says:
Yes.

Nikki says:
How do you see if you threw the ball correctly?

Anne says:
It shows on the screen.
It responds to how hard you “pretend” to bowl.

Nikki says:
Are you attached by the legs and hands?

Anne says:
No, you’re just holding what looks like a remote control.  No wires.

Nikki says:
And then the screen mimics your moves?

Anne says:
Yes.

Nikki says:
FANTASTIC

Nikki says:
I want to try bowling.

New Interface for Online Banking

There are many iPhone applications that integrate the phone camera with software in novel ways.  I came across this video demonstrating how it can be used to deposit checks electronically.

The interface demonstration starts at the 1 minute mark if you would like to skip the advertisement.*

*I’m not sure if it counts as an advertisement when most people aren’t allowed to bank there. You must have a connection to the military to use USAA (hence the aircraft carrier example in the video).

The Tactile Thinkpad: More Laptop Redesign from Lenovo

I posted earlier on the innovative data collection Lenovo did for a keyboard redesign. A new post on DesignMatters details the design and user testing of a new touch pad using tactile feedback.

Designers must often work within constraints induced by other portions of their product. In this case, the touchpad had to be flush with the hand rest of the laptop, meaning there was no way to signal the user when he or she was moving a finger on the pad versus the inactive borders.  The pad itself had to provide a tactile cue. From the post:

We studied a tremendous number of seemingly identical design variants of the dotted texture before we decided on the final version. Bumps varied by diameter, height, spacing, gloss, and even hardness.  Every sample was evaluated  by appearance and feel criteria. One test was to compare the surrounding palmrest texture to the pad samples to ensure that you could detect when your fingers moved beyond the pad boundries. We always did this with our eyes closed and then open. We also wanted to make certain the texture was pleasing to touch and look at. Many alternatives were rejected because they were too flashy looking,  felt like sandpaper, or just made people giggle. In case you are wondering , we never considered making the pad yellow.


Sampling of prototype tactile samples

As the product got closer to release we were also able to test the texture with multiple users for extended periods of time. The feedback we gathered was very positive. They were able to detect the border easily and often commented that the subtle texture gave them a sense of precision as they moved their finger across the pad. The bumps provide indication of  distance travelled and speed of movement. We found this effect to be of particular interest with multitouch gesture input.

I assume the ThinkPad has “scrollbars” in its touchpad along the right side and the bottom.  I wonder if they considered changing the texture for those areas so a user would know they could scroll. Of course, the scrollbars are identified only visually on most touchpads, and the user finds them by moving a finger all the way to the border of the touchpad. With no raised border, users could still find the edge by feeling for the change between textured and smooth, and I’d be interested to watch how well they did this. A raised edge affords moving along it; it traps your finger into a straight line. I’d like to compare that to a texture change.