CREATE (www.create-center.org) is offering a workshop on Designing for Older Adults. The workshop will present guidelines and best practices for designing for older adults. Topics include: Existing & Emerging Technologies, Usability Protocols, Interface & Instructional Design, and design for Social Engagement, Living Environments, Healthcare, Transportation, Leisure, and Work. Each participant will receive a complimentary copy of CREATE’s book Designing for Older Adults, 3rd Ed., winner of the Richard Kalish Innovative Publication Award (2019), and a USB drive with CREATE publications and tools.
Additional information on the workshop can be found below.
The third edition of the definitive source of information on designing for older adults has been published:
This new edition provides easily accessible and usable guidelines for practitioners who design for older adults. It includes an updated overview of the demographic characteristics of older adult populations and the scientific knowledge base of the aging process relevant to design. New chapters include Existing and Emerging Technologies, Work and Volunteering, Social Engagement, and Leisure Activities. Also included is basic information on user-centered design and specific recommendations for conducting research with older adults.
A 20% discount is available by using code ‘A004‘ at checkout from CRC Press.
The focus of this workshop is to bring together representatives from companies, organizations, and universities, large and small, who are involved in industry, product development, or research and who have an interest in meeting the needs of older adults. Additionally, members of the CREATE team will present guidelines and best practices for designing for older adults. Topics include: Existing & Emerging Technologies, Usability Protocols, Interface & Instructional Design, Technology in Social Engagement, Living Environments, Healthcare, Transportation, Leisure, and Work. Each participant will receive a complimentary copy of our book Designing for Older Adults.
If you would like a registration form or any further information on the conference accommodations, please contact Adrienne Jaret at: firstname.lastname@example.org or by phone at (646) 962-7153.
New NPR story on the non-usability of ballots, voting software, and other factors affecting our elections:
New York City’s voters were subject to a series of setbacks after the election board rolled out a perforated two-page ballot. Voters who didn’t know they had to tear at the edges to get at the entire ballot ended up skipping the middle pages. Then the fat ballots jammed the scanners, long lines formed, and people’s ballots got soaked in the rain. When voters fed the soggy ballots into scanners, more machines malfunctioned.
In Georgia, hundreds blundered on their absentee ballot, incorrectly filling out the birth date section. Counties originally threw out the ballots before a federal judge ordered they be counted.
And in Broward County, Fla., 30,000 people who voted for governor skipped the contest for U.S. Senate. The county’s election board had placed that contest under a block of multi-lingual instructions, which ran halfway down the page. Quesenbery says voters scanning the instructions likely skimmed right over the race.
She has seen this design before. In 2009, King County, Wash., buried a tax initiative under a text-heavy column of instructions. An estimated 40,000 voters ended up missing the contest, leading the state to pass a bill mandating that ballot directions look significantly different from the contests below.
“We know the answers,” says Quesenbery. “I wish we were making new mistakes, not making the same old mistakes.”
The story didn’t even mention the issues with the “butterfly ballot” from Florida in 2000. Whitney Quesenbery is right. We do know the answers, and we certainly know the methods for getting the answers. We need the will to apply them in our civics, not just in commercial industry.
Right after the Hawaii false nuclear alarm, I posted about how the user interface seemed to contribute to the error. At the time, sources were reporting it as a “dropdown” menu. Well, that wasn’t exactly true, but in the last few weeks it’s become clear that truth is stranger than fiction. Here is a run-down of the news on the story (spoiler, every step is a human factors-related issue):
Hawaii nuclear attack alarms are sounded, also sending alerts to cell phones across the state
Alarm is noted as false and the state struggles to get that message out to the panicked public
The actual interface is found and shown – rather than a drop-down menu it’s just closely clustered links on a 1990s-era website-looking interface that say “DRILL-PACOM(CDW)-STATE ONLY” and “PACOM(CDW)-STATE ONLY”
Latest news: the employee who sounded the alarm says it wasn’t an error, he heard this was “not a drill” and acted accordingly to trigger the real alarm
The now-fired employee has spoken up, saying he was sure of his actions and “did what I was trained to do.” When asked what he’d do differently, he said “nothing,” because everything he saw and heard at the time made him think this was not a drill. His firing is clearly an attempt by Hawaii to get rid of a ‘bad apple.’ Problem solved?
It seems like a good time for my favorite reminder from Sidney Dekker’s book, “The Field Guide to Human Error Investigations” (abridged):
To protect safe systems from the vagaries of human behavior, recommendations typically propose to:
• Tighten procedures and close regulatory gaps. This reduces the bandwidth in which people operate. It leaves less room for error.
• Introduce more technology to monitor or replace human work. If machines do the work, then humans can no longer make errors doing it. And if machines monitor human work, they can snuff out any erratic human behavior.
• Make sure that defective practitioners (the bad apples) do not contribute to system breakdown again. Put them on “administrative leave”; demote them to a lower status; educate or pressure them to behave better next time; instill some fear in them and their peers by taking them to court or reprimanding them.
In this view of human error, investigations can safely conclude with the label “human error”—by whatever name (for example: ignoring a warning light, violating a procedure). Such a conclusion and its implications supposedly get to the causes of system failure.
AN ILLUSION OF PROGRESS ON SAFETY
The shortcomings of the bad apple theory are severe and deep. Progress on safety based on this view is often a short-lived illusion. For example, focusing on individual failures does not take away the underlying problem. Removing “defective” practitioners (throwing out the bad apples) fails to remove the potential for the errors they made.
…[T]rying to change your people by setting examples, or changing the make-up of your operational workforce by removing bad apples, has little long-term effect if the basic conditions that people work under are left unamended.
A ‘bad apple’ is often just a scapegoat that makes people feel better by giving a focus for blame. Real improvements and safety happen by improving the system, not by getting rid of employees who were forced to work within a problematic system.
The morning of January 13th, people in Hawaii received a false alarm that the state was under nuclear attack. One of the messages people received was via cell phones, and it said: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” Today, the Washington Post reported that the alarm was due to an employee pushing the “wrong button” when trying to test the nuclear alarm system.
To sum up the issue, the alarm is triggered by choosing an option in a drop down menu, which had options for “Test missile alert” and “Missile alert.” The employee chose the wrong dropdown and, once chosen, the system had no way to reverse the alarm.
A nuclear alarm system should be subject to particularly high usability requirements, but this system didn’t even conform to Nielsen’s 10 heuristics. It violates:
User control and freedom: Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
And those are just the ones I could identify from reading the Washington Post article! Perhaps a human factors analysis will become required for these systems, as it is for FDA-regulated medical devices.
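To make the heuristics concrete, here is a minimal sketch of how an alert-triggering interface could honor them: distinct live vs. test actions, an explicit confirmation step before a live alert (error prevention), a cancellation window (user control and freedom), and status messages at every step (visibility of system status). The class, its method names, and the 10-second window are all hypothetical illustrations, not a description of Hawaii’s actual system.

```python
import time


class AlertSystem:
    """Illustrative alert trigger that follows several of Nielsen's heuristics."""

    UNDO_WINDOW_SECONDS = 10  # hypothetical grace period before an alert commits

    def __init__(self, send_fn):
        self.send_fn = send_fn  # callback that actually broadcasts the alert
        self.pending = None     # (message, queued_at) awaiting the undo window

    def request_alert(self, kind, confirmed=False):
        # Error prevention: a LIVE alert requires an explicit confirmation step,
        # so a mis-click on a menu item cannot commit the action by itself.
        if kind == "LIVE" and not confirmed:
            return "CONFIRM_REQUIRED: re-issue with confirmed=True to send a LIVE alert"
        message = "TEST alert" if kind == "TEST" else "LIVE alert"
        self.pending = (message, time.monotonic())
        return f"QUEUED: {message} (cancellable for {self.UNDO_WINDOW_SECONDS}s)"

    def cancel(self):
        # User control and freedom: an emergency exit before the send commits.
        if self.pending is None:
            return "NOTHING_PENDING"
        self.pending = None
        return "CANCELLED"

    def flush(self):
        # Visibility of system status: report exactly what, if anything, was sent.
        if self.pending is None:
            return "NOTHING_SENT"
        message, _ = self.pending
        self.pending = None
        self.send_fn(message)
        return f"SENT: {message}"
```

The point of the sketch is the sequence, not the implementation: had the real system required confirmation and offered a cancellation window, the wrong menu choice would have been recoverable.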
The original Blade Runner is my favorite movie and can be credited with sparking my interest in human-technology/human-autonomy interactions. The sequel is fantastic if you have not seen it (I’ve seen it twice already and will soon see it a third time).
If you’ve seen the original or sequel, the representations of incidental technologies may have seemed unusual. For example, the technologies feel like a strange hybrid of digital/analog systems, they are mostly voice controlled, and the hardware and software have a well-worn look. Machines also make satisfying noises as they work (also present in the sequel). This is a refreshing contrast to the super clean, touch-based, transparent augmented reality displays shown in other movies.
The article suggests that the team really thought deeply about how to portray technology and UI by thinking about the fundamentals (I would love to have this job):
Blade Runner 2049 was challenging because it required Territory to think about complete systems. They were envisioning not only screens, but the machines and parts that would make them work.
With this in mind, the team considered a range of alternate display technologies. They included e-ink screens, which use tiny microcapsules filled with positively and negatively charged particles, and microfiche sheets, an old analog format used by libraries and other archival institutions to preserve old paper documents.
I was reading an article on my local news today and saw this graphic, apparently made for the article.
Being from Alabama, and just a pattern-recognition machine in general, I immediately noticed it was an anomaly. The lightest pink surrounded on all sides by the darkest red? Unlikely. The writer helpfully provided a source though, from the FBI, so I could look at the data myself.
There, right at the start, is a footnote for Alabama. It says “3 Limited supplemental homicide data were received.” Illinois is the only other state with a footnote, but because it’s not so different from its neighbors, it didn’t stand out enough for me to notice.
Florida was not contained in the FBI table and thus is grey – a good choice to show there were no data for that state. But as for Alabama and Illinois, it’s misleading to include known bad data in a graph that has no explanations. They should also be grey, rather than imply the limited information is the truth.
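The fix the post argues for can be sketched in a few lines: before mapping, treat any value the source flags as incomplete the same way you treat a state that is absent from the table, so both render grey rather than as a misleadingly low rate. The rates and footnote flags below are illustrative stand-ins, not the FBI’s actual figures.

```python
# Illustrative (not actual) rates per 100,000; footnoted = states the source
# flags as having incomplete data, no_data = states absent from the table.
rates = {"Alabama": 1.2, "Georgia": 11.8, "Illinois": 3.4, "Mississippi": 12.1}
footnoted = {"Alabama", "Illinois"}
no_data = {"Florida"}


def plot_value(state):
    """Return the rate to color the state by, or None to render it grey."""
    if state in no_data or state in footnoted:
        return None  # grey: don't imply a known-incomplete number is the truth
    return rates.get(state)
```

With this rule, Alabama and Illinois get the same grey treatment as Florida, and the anomalous light-pink square never appears.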
I looked up similar data from other sources to check how misleading the graphic was. Because wouldn’t it be nice if my home state had figured out some magic formula for preventing firearm deaths? Unfortunately, The Centers for Disease Control (CDC) statistics on gun deaths put Alabama in the top 4 for the most gun deaths. That’s quite the opposite of the optimism-inducing light pink in the first graphic. The graph below is for 2014 while the first graphic is for 2013, but in case you might be thinking there was some change, I also looked up 2012 (the CDC appears to publish data every two years). The CDC put firearm deaths per person in Alabama even higher that year than in 2014.
I admit a fascination for reading about disasters. I suppose I’m hoping for the antidote. The little detail that will somehow protect me next time I get into a plane, train, or automobile. A gris-gris for the next time I tie into a climbing rope. Treating my bike helmet as a talisman for my commute. So far, so good.
He tells the story of a chartered plane crash in Bedford, Massachusetts in 2014, a take-off with so many skipped safety steps and errors that it seemed destined for a crash. There was plenty of time for the pilot to stop before the crash, leading Rapp to say “It’s the most inexplicable thing I’ve yet seen a professional pilot do, and I’ve seen a lot of crazy things. If locked flight controls don’t prompt a takeoff abort, nothing will.” He sums up the reasons for these pilots’ “deviant” performance via Diane Vaughan’s factors of normalization (some interpretation on my part, here):
If rules and checklists and regulations are difficult, tedious, unusable, or interfere with the goal of the job at hand, they will be misused or ignored.
We can’t treat top-down training or continuing education as the only source of information. People pass on shortcuts, tricks, and attitudes to each other.
Reward the behaviors you want. But we tend to punish safety behaviors when they delay secondary (but important) goals, such as keeping passengers happy.
We can’t ignore the social world of the pilots and crew. Speaking out against “probably” unsafe behaviors is at least as hard as calling out a boss or coworker who makes “probably” racist or sexist comments. The higher the ambiguity, the less likely people are to take action (“I’m sure he didn’t mean it that way,” or “Well, we skipped that checklist, but it’s been fine the ten times we’ve done it so far.”)
The cure? An interdisciplinary solution coming from human factors psychologists, designers, engineers, and policy makers. That last group might be the most important, in that they recognize a focus on safety is not necessarily more rules and harsher punishments. It’s checking that each piece of the system is efficient, valued, and usable and that those systems work together in an integrated way.
Thanks to Travis Bowles for the heads-up on this article.
Feature photo from the NTSB report, photo credit to the Massachusetts Police.
I stopped over in Iceland on the way to a conference and popped in to the Reykjavik City Museum, not knowing what I’d find. I love the idea of technology in a museum, but I’m usually disappointed. Either the concepts are bad, the technology is silly (press a button, light some text), or it just doesn’t work, beaten into submission by armies of 4-year-olds.
Not at the Settlement Exhibit in Reykjavik. There are two unique interfaces I want to cover, but I’ll start at the beginning with a more typical touchscreen that controlled a larger wall display. As you enter the museum, there are multiple stations for reading pages of the Sagas. These are the stories of their history, from the 9th to 11th centuries, and beautifully illustrated.
They have been scanned, so you can browse the pages (with translations) and not damage them. I didn’t have all day to spend there, but after starting some of the Sagas, I wished I had.
Further in you see the reason for the location: the excavation of the oldest known structure in Iceland, a longhouse, is in the museum! Around it are typical displays with text and audio, explaining the structure and what life was like at that time.
Then I moved into a smaller dark room with an attractive lit podium (see video below). You could touch it, and it controlled the large display on the wall. The display showed the longhouse as a 3-D virtual reconstruction. As you moved your finger around the circles on the podium, the camera rotated so you could get a good look at all parts of the longhouse. As you moved between circles, a short audio clip would play to introduce you to the next section. Each circle controlled the longhouse display, but the closer to the center, the more “inside” the structure you could see. Fortunately, I found someone else made a better video of the interaction than I did:
The last display was simple, but took planning and thought. Near the exit was a large table display of the longhouse. It was also a touch interface, where you could put your hand on the table to activate information about how parts of the house were used. Think of the challenges: when I was there, it was surrounded by 10 people, all touching it at once. We were all looking for information in different languages. It has to be low enough for everyone to see, but not so low it’s hard to touch. Overall, they did a great job.
Be sure to do a stopover if you cross the Atlantic!
I enjoyed this article by Matt Gallivan, Experience Research Manager at Airbnb, about the tendency of experts to overgeneralize their knowledge. I try to watch out for it in my own life: when you’re an expert at one thing, it’s easy to think you know more than you do about other areas.
Because if you’re a UX researcher, you do yourself and your field no favors when you claim to have all of the answers. In the current digital product landscape, UX research’s real value is in helping to reduce uncertainty. And while that’s not as sexy as knowing everything about everything, there’s great value in it. In fact, it’s critical. It also has the added bonus of being honest.