Automakers: Don’t skimp on the interface!

An angry but insightful comment from an automotive journalist about the electronic interface of the 2011 Buick Regal:

Non touchscreen touchscreen: The GM navigation system and the graphics for it are designed with a touchscreen in mind — when entering in a destination, there is a recreation of a keyboard that allows you to punch in your letters and numbers. But, you can’t do that in the Regal.

So, Option 1: Use the clickable iDrive knob that falls more readily at hand. You can click the individual letter icons, but going through them takes FOREVER because you’re scanning one letter at a time across a keyboard icon. Audi and BMW both display the alphabet around a circle, which makes it quicker to program and easier to decipher.

Or, Option 2: Use the dash knob: This allows you to either rotate through the keyboard or move around it up, down and laterally using the multi-directional pad. Better than option 1, but the knob’s placement is less convenient.

Or, Option 3: Forget the knobs altogether and use the voice controls. This works, though it takes a very long time (the playback prompts don’t help) and for some reason, when I tried to use them, it didn’t ask me for an address number. Instead, I only had the option of going to some indiscriminate point on Flamingo Road.

He ends with this scathing comment:

Compared to our Acura TSX Wagon or departed Cadillac CTS, the Regal’s electronics interface seems like someone just didn’t try. When a brand is trying to convince people it deserves to be considered amongst luxury brands, it’s details like these that make a car stand above. The Genesis and Equus seem like (and are) luxury cars because Hyundai went all in.

http://blogs.insideline.com/roadtests/2011/02/2011-buick-regal-nonsensical-electronics-controls.html

Apple, UCD, and Innovation – A Guest Post by Travis Bowles

This guest post is in response to the article "User-Led Innovation Can't Create Breakthroughs: Just ask Apple and IKEA" at fastcodesign.com.

From the article:

One evening, well into the night, we asked some of our friends on the Apple design team about their view of user-centric design. Their answer? “It’s all bullshit and hot air created to sell consulting projects and to give insecure managers a false sense of security. At Apple, we don’t waste our time asking users, we build our brand through creating great products we believe people will love.”

I'd argue that someone at Apple noticed that Microsoft had been building tablets since 2002 without gaining much traction. Apple tends to step in and refine the work of others, often picking the perfect moment when the capabilities of a technology and users' willingness to accept it intersect. How they choose those moments is hotly debated; if the answer were more apparent, their competitors wouldn't keep following them into markets that, in many cases, those same competitors pioneered (see Microsoft with tablets, Creative Labs with MP3 players before the iPod, Xerox with GUIs).

I believe there is a common misunderstanding that user-centered design (UCD) means asking users what they need and building it. If that were UCD, we could just let marketing and sales departments design products from the feature lists customers provide, and in many cases that would be enough information to drive an evolutionary product design process. However, I would argue that proper, full-spectrum user-centered design *leads* to revolutionary product designs. The problem lies in the assumption that user-centered design is building what the user thinks he or she wants.

Jonathan Ive is fond of a quote attributed to Henry Ford that I use to explain the difference between customer feedback and user experience research: "If I had asked people what they needed, they would have said faster horses." I think this sums up the Apple philosophy: they are creating things so new and cool that future users wouldn't even know what technologies were available, let alone be able to assemble them into a new category of device. The mistake here is believing that the only tool available to the UCD practitioner is asking users, "What should we build for you?"

1910 Model T Ford, Salt Lake City, Utah

What Ive ignores is that, although Henry Ford didn't rely on potential customers to define his product, he did learn about their needs and accommodate them. The original Model Ts were designed to run on ethanol for the benefit of farmers, who could make their own fuel from the land (as they did for their horses), and to be easily serviced by owners in the field (as they serviced their other farm equipment), in contrast to some more expensive competitors. He didn't ask his users to design his product, but he informed his designs by learning about their environment, goals, and needs.

On a smaller scale, I've seen failures of this sort during user testing, when some participants offer direct design advice: put this button here, add this feature there. A lot of researchers get frustrated and dismiss this sort of input, correctly asserting that the participant is not there to redesign the UI. I find, however, that follow-up questions about these suggestions often produce interesting data points concerning user expectations, needs, and even mental models of the system. I sometimes wonder whether some designers and researchers overreact because they feel their own value is undermined when they acknowledge any value in the ideas of potential users.

One last thought: the new crop of development-centric, massively networked products presents new challenges to the value of UCD. Startups have always moved quickly, and they've always run the risk of losing the race to release if they spend too much time "polishing" a product before its initial release. As a result, user experience and feature depth were usually poor to start with and improved as the user base grew. The major changes in user experience were made while the number of users forced to adjust was still small, and by the time wide-scale adoption was realized, changes had generally settled into enhancements and logical upgrades. (I'm speaking largely of software here, but consumer electronics also fit the pattern.)

Recently, however, a product has needed to become ubiquitous almost upon release to be successful. Between social networks and newly established cycles of technology obsolescence,* there is little time to build up a base of users who try early versions of your product before widespread acceptance. One might assume this would motivate companies to use UCD to get the design right before that initial release, but this has not been the strategy of the biggest winners. Instead, I believe successful companies set out to deliver one killer feature, or a handful of them, often wrapped in a barely serviceable user experience, to as many people as quickly as possible. Rather than risk missing a key moment, they skip needs gathering and early-stage user research and take their best shot. If the product is successful and widely adopted, the reasoning goes, they can go back and improve the experience later with direct user feedback.

Of course, this approach runs into practical problems. An installed user base may rebel when confronted with change (although if you provide an irreplaceable device or service, people will complain but remain your customers). And once the company is successful, it has the dual role of building an improved future experience while maintaining the current one, splitting resources and attention. For these reasons, companies often find it hard to follow through on step two of the plan, where step one is "get customers" and step two is "make the product better for those customers." In this phase, iterative refinement of the design gets bogged down in new features, and there is no time for full-spectrum user research.

Given all this, I wonder whether, outside of giant corporations and products with decade-spanning development cycles (aircraft, medical technologies, or anything the government watches over), we are likely to see a rapid decline in user research in innovative product design, and in early development for most products. My intuition is that demand will grow for practitioners capable of research, design, and implementation, but with less specialized training in user research and user-centered design. The only "concrete" evidence I can offer is my anecdotal observation that the majority of interesting user research opportunities I've found have specifically requested a developer or engineer who can conduct research or complete designs in addition to implementing them.

* Products such as netbooks, iPads, iPods, and smartphones are as expensive as appliances we used to expect 10+ years of service from. The average washing machine costs less than an iPad, yet you can expect the iPad to be out of date in about two years. People would be up in arms if their washing machines (or even their microwaves, at a quarter of the price) stopped performing after two years.

Travis Bowles, M.S., is a usability consultant in San Francisco specializing in enterprise software, novel consumer electronics, and web interfaces.

(post photo credit: flickr user raneko)

Automation Issues Hit the Big Time on NPR

NPR brings home the safety issues of too much cockpit automation.

From the NPR story:

“It was a fairly busy time of the day. A lot of other airliners were arriving at the same time, which means that air traffic control needed each of us on very specific routes at specific altitudes, very specific air speeds, in order to maintain this smooth flow of traffic,” he says.

So, air traffic control told him to fly a particular course. He and the other pilot flying the jet set the flight automation system to do it.

“What I anticipated the aircraft to do was to continue this descent,” he says. “Well instead, the aircraft immediately pitched up, very abruptly and much to my surprise. And both of us reached for the yoke, going, ‘What’s it doing?’ and there’s that shock of, ‘Why did it do that, what’s it going to do next?’”

We've posted on this topic before, when we discussed Dr. Sethumadhavan's work and Dr. Sanchez's work. For more cutting-edge automation failure research, watch these labs:

If your lab should be listed and isn’t, send me an email!

Unintended Consequences of Design: Keyless Ignition Revisited

Peter Hancock, writing in the January issue of The Ergonomist, describes the hidden dangers of rapidly advancing automotive technology (noise and vibration suppression, keyless ignition). Noise, vibration, and the mechanical key all provide useful cues that the car is still on. Removing those cues can lead to mode errors:

In previous generations of vehicles, leaving the car ‘on’ as you exit tends also to provide a series of visual, auditory and even tactile kinesthetic cues as to its status. Old-time vehicles tended to make a considerable noise, their exhaust was often visible and the whole vehicle tended to vibrate noticeably while the engine was on. Over the immediate past decades, designers and engineers have sought ways to reduce these sources of disturbance since they were perceived as being perhaps unpleasant.
However, these nominally adverse effects contained problematic yet important informational content. Modern vehicles now rarely belch smoke from the exhaust. Efforts have also been very successful at reducing both noise and vibration such that modern vehicles have now indeed become whisper quiet.
It might initially seem that leaving your engine running is more of an inconvenience than a significant threat. This is simply incorrect. The cases in the United States which have so far accrued from this form of design-induced error have been fatal.
A vehicle ‘running’ in an enclosed space with direct access for the exhaust to the airflow into your house is indeed a deadly trap. Sadly, a number of individuals now appear to have fallen into that trap.  This example may be one of these adverse but unintentional design outcomes.
There does not appear to be an online copy, so I'm attaching the PDF here (thanks, Rick!).
(post image from flickr user IceNineJon)

False Alarms in the Hospital

NPR pointed me to a two-part series in The Boston Globe examining the incessant din of patient alarms.

The monitor repeatedly sounded an alarm — a low-pitched beep. But on that January night two years ago, the nurses at St. Elizabeth’s Medical Center in Brighton didn’t hear the alarm, they later said. They didn’t discover the patient had stopped breathing until it was too late.

These were just two of more than 200 hospital patients nationwide whose deaths between January 2005 and June 2010 were linked to problems with alarms on patient monitors that track heart function, breathing, and other vital signs, according to an investigation by The Boston Globe. As in these two instances, the problem typically wasn’t a broken device. In many cases it was because medical personnel didn’t react with urgency or didn’t notice the alarm.

They call it “alarm fatigue.” Monitors help save lives, by alerting doctors and nurses that a patient is — or soon could be — in trouble. But with the use of monitors rising, their beeps can become so relentless, and false alarms so numerous, that nurses become desensitized — sometimes leaving patients to die without anyone rushing to their bedside.

This is a very well-studied topic in human-automation interaction research, and we can understand why false alarms are so prevalent in healthcare settings. If you were hooked up to a patient monitoring device, would you rather have (a) a machine that misses some important changes but rarely beeps (low false alarm rate + high miss rate), or (b) one that beeps constantly to warn you that something might be wrong but is frequently wrong itself (high false alarm rate + low miss rate)? You'd probably pick option (b), because of the inherent risk of missing a life-threatening critical event.
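To make the tradeoff concrete, here is a minimal simulation sketch in Python. The distributions, threshold values, and the idea of a single vital-sign reading are my own illustrative assumptions, not data from the Globe series or the papers below.

import random

random.seed(1)

# Hypothetical vital-sign readings (illustrative numbers only).
# Stable patients tend to produce lower readings and critical patients
# higher ones, but the distributions overlap, so no threshold is perfect.
stable = [random.gauss(100, 15) for _ in range(10000)]
critical = [random.gauss(140, 15) for _ in range(10000)]

def alarm_rates(threshold):
    # The alarm fires whenever a reading exceeds the threshold.
    false_alarm_rate = sum(x > threshold for x in stable) / len(stable)
    miss_rate = sum(x <= threshold for x in critical) / len(critical)
    return false_alarm_rate, miss_rate

for threshold in (110, 120, 130):
    fa, miss = alarm_rates(threshold)
    print(f"threshold {threshold}: false alarms {fa:.1%}, misses {miss:.1%}")

Lowering the threshold (option b) drives the miss rate toward zero but floods the ward with false alarms; raising it does the opposite. Because the two distributions overlap, the tradeoff is built into the monitor; no amount of nurse diligence can make both error rates zero.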

But, as past research (and the linked articles) demonstrates, a high false alarm rate can have very detrimental effects on the person monitoring the alarm. Keep in mind: the nurses in the story DO NOT WANT to ignore the alarms! The article walks a fine line near blaming the user (though it never quite does). The sheer number of alarms makes it difficult for nurses and other healthcare workers to tell true critical events from false alarms.

Automation in healthcare is a topic I've only recently dipped my toes into, and it's fascinating and complex. Here are some papers on false alarms and how they affect operators and users.

Dixon, S., Wickens, C. D., & McCarley, J. S.  (2007).  On the independence of compliance and reliance: Are automation false alarms worse than misses?  Human Factors, 49(4), 564-572.

Meyer, J.  (2001).  Effects of warning validity and proximity on responses to warnings.  Human Factors, 43(4), 563-572.

(photo: flickr user moon_child; CC by-NC 2.0)

Honesty Hurts (especially when design is poor)

I enjoy the mix of economics and psychology, which is why I am a faithful reader of the Freakonomics blog. Their recent podcast on "pain" started off with a good human-factors-related tale about the problematic design of a subway alarm system. I've included a link to the podcast below, but the quick overview is that an ear-piercing alarm is triggered by using the "emergency" exit, which is invariably used every day by someone wanting to get out faster than the turnstiles permit.

The person breaking the rules hears the alarm for the shortest time and faces no repercussions. The law-abiding citizens waiting in line to exit get to listen to it the longest.

Link to the podcast

Photo Credit Wavebreaker @ Flickr

Designer of movie UIs to design real UIs

We've discussed Mark Coleran before for his fantastical work on the fake user interfaces you see in movies (see the reel below). According to this Fast Company blog post, he will now have a hand in designing real interfaces.

But Coleran doesn’t just throw out the rule books on user experience and “human interface guidelines.” In fact, because many of his clients know his movie work, he spends a lot of time talking them out of doing something like Children of Men or The Bourne Ultimatum. “One of my biggest frustrations is when people will say, ‘We have these specifications and requirements, now execute it just like we saw in the movie,’” he says. “What they don’t realize is that the requirements for those movie FUIs were completely unlike the ones that they’re dealing with. In a movie, you see an interface for at most a couple of seconds. In real life, every design decision has a consequence, and it doesn’t go away. It’s there day in and day out. Those human interface guidelines are there for very good reasons.”

Coleran Reel 2008.06 HD from Mark Coleran on Vimeo.

Trashcan Affordances

The picture above shows the front and back of a trashcan designed to be lifted by machinery.

This past weekend I helped my parents start clearing their home for an upcoming move and filled this trashcan to capacity. I didn't want my mother to have to haul it to the street, so I went to do that before I left.

I looked at the front of the can and saw the metal bar (left picture). The wheels were on the other side, so I thought, "I guess I grab the bar so it tilts onto the wheels." When I did, it pretty much instantly tipped over and dumped out everything it contained (much of which wasn't in bags, since I knew a machine would dump the can into the truck).

Because, of course, you should grab the handles at the top (right picture) and lean the can toward you on its wheels. But I saw the bar, so I did what you do with a bar and grabbed it. I learned two things: 1. I don't know much about trashcans (lucky me?), and 2. Sometimes affordances are for non-humans. That bar was meant to be grabbed by the garbage truck's lift, not by me!

Kitchen Taskonomy Part 2: Paying Bills (A Guest Post by Kim Wolfinbarger)

In my previous post, I talked about applying taskonomy to kitchen organization. Instead of organizing objects by name or physical similarity (a taxonomy), a taskonomic approach organizes objects by the way they are used.
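For readers who think in code, here is a toy sketch of the distinction in Python (the items and task groupings are my own illustrative examples, not from the original post):

# Taxonomy: items grouped by what they are.
taxonomy = {
    "utensils": ["measuring cups", "whisk", "spatula"],
    "appliances": ["mixer", "toaster"],
    "dry goods": ["flour", "sugar"],
}

# Taskonomy: the same items grouped by the task that uses them together.
taskonomy = {
    "baking": ["measuring cups", "whisk", "mixer", "flour", "sugar"],
    "making breakfast": ["toaster", "spatula"],
}

# A taskonomic layout gathers everything one task needs in a single lookup:
print(taskonomy["baking"])

In a taxonomic kitchen, "baking" means visiting three different storage places; in a taskonomic one, it means opening a single cabinet.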

Today I’m discussing how I used taskonomy to revamp my overly precise but neglected system for paying bills. Paying bills used to be a real chore.  (Yes, I hear you saying that I could solve my problems by signing up for online bill payment. I have several reasons for handling this the old-fashioned way, one of which is my aversion to having money automatically removed from my bank account.)

First, I collected the bills from their basket, the checkbook from my purse, and pens, stamps, and mailing labels from the drawer. Inevitably I forgot to get envelopes for the annoying bills that require me to use my own, so I always had to make a trip back to the desk to retrieve those. (I don't usually pay bills at the desk, since it's full of computer equipment, but the home-office setup is a subject for a different post.) Simply collecting the materials took five minutes. After spending the next 30 minutes writing checks, I placed the statements in separate files: one for utility bills, another for mortgage payments, one for auto insurance and a separate one for home insurance, one for each of the credit cards, and one for each of our investment accounts. But it took so long to file them that I was much more likely to stack them in a pile and file them later. Much, much later. The piles did nothing for my home decor, and when I needed to find a particular statement, it was never with the others. Twice a year, in desperation, my husband and I would sort through the stacks, put the statements in order, gripe about the missing ones (inevitably the Most Important for Tax Purposes), haul three bags of trash to the dumpster, and consider calling a marriage therapist.

After yet another marathon file-and-shred session, I finally admitted that the system required more time and self-discipline than I had. Just as taskonomy had brought order to my kitchen, I suspected that it would also work for bill-paying. Inspiration came in the form of Marla Cilley's book, Sink Reflections. Following her suggestion for a "portable office," I bought a plastic accordion file with thirteen dividers and a deeper front pocket. In the front pocket, I placed a pen, return-address stickers, and blank envelopes. A smaller insert pocket held stamps. Just behind that section, I placed my mortgage-coupon book and bills to be paid. I labeled the other 12 sections by month. Finally, I wrote on an index card a monthly checklist of the regular bills. Now, when I am ready to pay bills, everything is in one place. I can pay bills at the kitchen table or while waiting to pick up my kids from ball practice. Once a bill is paid, I check that item off the list and file the statement in one of the monthly slots. I never lose a statement, and my husband can find the ones he needs without help from me. At the end of the year, I throw away the statements I no longer need and file the others in the official cabinet.

Like the taskonomic pantry arrangement, this system for organizing bills has worked for over two years. So this January, instead of spending a day sorting and shredding, I’ll be seeking new projects for taskonomic redesign.

Kim Wolfinbarger is the recruitment coordinator and an adjunct instructor for the School of Industrial Engineering, University of Oklahoma. Her research interests include usability, product design, industrial ergonomics and design for special populations.