Category Archives: usability

Designing the technology of ‘Blade Runner 2049’

The original Blade Runner is my favorite movie and can be credited with sparking my interest in human-technology and human-autonomy interactions.  The sequel is fantastic; if you have not seen it, go (I’ve seen it twice already and will soon see it a third time).

If you’ve seen the original or the sequel, the representations of incidental technologies may have seemed unusual.  For example, the technologies feel like a strange hybrid of digital and analog systems, they are mostly voice controlled, and the hardware and software have a well-worn look.  Machines also make satisfying noises as they work (true in the sequel as well).  This is a refreshing contrast to the super-clean, touch-based, transparent augmented-reality displays shown in other movies.

This great article from Engadget [WARNING: CONTAINS SPOILERS] profiles the company that designed the technology shown in Blade Runner 2049.  I’ve always been fascinated by futuristic UI concepts shown in movies.  What is the interaction like?  What is the information density?  Is it multi-modal?  Why does it work like that, and does it fit in-world?

The article suggests that the team really thought deeply about how to portray technology and UI by thinking about the fundamentals (I would love to have this job):

Blade Runner 2049 was challenging because it required Territory to think about complete systems. They were envisioning not only screens, but the machines and parts that would make them work.

With this in mind, the team considered a range of alternate display technologies. They included e-ink screens, which use tiny microcapsules filled with positively and negatively charged particles, and microfiche sheets, an old analog format used by libraries and other archival institutions to preserve old paper documents.

 

Calling the Media out for Misleading InfoViz

I was reading an article on my local news today and saw this graphic, apparently made for the article.

[Screenshot: the article’s map of gun murders by state, shaded light pink to dark red]
Being from Alabama, and just a pattern-recognition machine in general, I immediately noticed an anomaly. The lightest pink surrounded on all sides by the darkest red? Unlikely. The writer helpfully provided an FBI source, though, so I could look at the data myself.

[Screenshot: the FBI table of murders by state, with footnotes]

There, right at the start, is a footnote for Alabama. It says “3 Limited supplemental homicide data were received.” Illinois is the only other state with a footnote, but because it’s not so different from its neighbors, it didn’t stand out enough for me to notice.

Florida was not included in the FBI table and thus is grey – a good choice to show there were no data for that state. But for Alabama and Illinois, it’s misleading to include known-bad data in a graph with no explanation. They should also be grey, rather than implying the limited information is the truth.
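The fix is mechanical once the data are loaded: route flagged or missing states to a separate “no data” category instead of letting their partial counts fall into the lightest color bin. A minimal sketch in Python (the rates, thresholds, and hex colors here are made up for illustration, not the FBI’s or the article’s actual values):

```python
# Map a state's murder rate to a color bin, but send states with
# missing or footnoted (known-incomplete) data to a neutral grey.

NO_DATA = "#cccccc"  # grey for absent or unreliable data
BINS = [(2.0, "#fee5d9"), (4.0, "#fcae91"),
        (6.0, "#fb6a4a"), (float("inf"), "#a50f15")]

def state_color(rate, footnoted=False):
    """Return a fill color; grey if the rate is missing or flagged."""
    if rate is None or footnoted:
        return NO_DATA
    for upper, color in BINS:
        if rate < upper:
            return color

# Hypothetical inputs: Alabama's count is footnoted, Florida's is absent.
print(state_color(1.8, footnoted=True))   # Alabama -> grey, not light pink
print(state_color(None))                  # Florida -> grey
print(state_color(6.2))                   # a high-rate state -> darkest red
```

The key design choice is that the grey category is checked before binning, so incomplete data can never masquerade as a low rate.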

I looked up similar data from other sources to check how misleading the graphic was. Because wouldn’t it be nice if my home state had figured out some magic formula for preventing firearm deaths? Unfortunately, the Centers for Disease Control and Prevention (CDC) statistics on gun deaths put Alabama in the top four states for gun deaths. That’s quite the opposite of the optimism-inducing light pink in the first graphic. The graph below is for 2014 while the first graphic is for 2013, but in case you’re thinking something changed between years, I also looked up 2012 (the CDC appears to publish these data every two years). The CDC put firearm deaths per person in Alabama even higher that year than in 2014.
[Screenshot: CDC graph of firearm deaths per capita by state, 2014]

In closing, I don’t think this graphic was intentionally misleading. Sure, there are plenty of examples where I would be happy to allege malice instead of bad design, but most of the time it’s probably just people working under a deadline or with software tools that don’t allow custom corrections. We do have to be careful – I’d hate to see Alabama not receive aid to curb its firearm death rate because of a poor information visualization.

Institutional Memory, Culture, & Disaster

I admit a fascination for reading about disasters. I suppose I’m hoping for the antidote. The little detail that will somehow protect me next time I get into a plane, train, or automobile. A gris-gris for the next time I tie into a climbing rope. Treating my bike helmet as a talisman for my commute. So far, so good.

As human factors psychologists and engineers, we often analyze large-scale accidents and look for the reasons (pun intended) that run deeper than a single operator’s error. You can see some of my previous posts on Wiener’s Laws, Ground Proximity Warnings, and the Deepwater Horizon oil spill.

So, I invite you to read this wonderfully detailed blog post by Ron Rapp about how safety culture can slowly derail, “normalizing deviance.”

Bedford and the Normalization of Deviance

He tells the story of a chartered plane crash in Bedford, Massachusetts in 2014, a take-off with so many skipped safety steps and errors that it seemed destined for a crash. There was plenty of time for the pilot to abort before the crash, leading Rapp to say, “It’s the most inexplicable thing I’ve yet seen a professional pilot do, and I’ve seen a lot of crazy things. If locked flight controls don’t prompt a takeoff abort, nothing will.” He sums up the reasons for these pilots’ “deviant” performance via Diane Vaughan’s factors of normalization (with some interpretation on my part):

  • If rules and checklists and regulations are difficult, tedious, unusable, or interfere with the goal of the job at hand, they will be misused or ignored.
  • We can’t treat top-down training or continuing education as the only source of information. People pass on shortcuts, tricks, and attitudes to each other.
  • Reward the behaviors you want. But we tend to punish safety behaviors when they delay secondary (but important) goals, such as keeping passengers happy.
  • We can’t ignore the social world of the pilots and crew. Speaking out against “probably” unsafe behaviors is at least as hard as calling out a boss or coworker who makes “probably” racist or sexist comments. The higher the ambiguity, the less likely people are to take action (“I’m sure he didn’t mean it that way” or “Well, we skipped that checklist, but it’s been fine all ten times so far”).
  • The cure? An interdisciplinary solution coming from human factors psychologists, designers, engineers, and policy makers. That last group might be the most important, in that they recognize a focus on safety is not necessarily more rules and harsher punishments. It’s checking that each piece of the system is efficient, valued, and usable and that those systems work together in an integrated way.

    Thanks to Travis Bowles for the heads-up on this article.
    Feature photo from the NTSB report, photo credit to the Massachusetts Police.

    Thoughtful and Fun Interfaces in the Reykjavik City Museum

    I stopped over in Iceland on the way to a conference and popped in to the Reykjavik City Museum, not knowing what I’d find. I love the idea of technology in a museum, but I’m usually disappointed. Either the concepts are bad, the technology is silly (press a button, light some text), or it just doesn’t work, beaten into submission by armies of 4-year-olds.

    Not at the Settlement Exhibit in Reykjavik. There are two unique interfaces I want to cover, but I’ll start at the beginning with a more typical touchscreen that controlled a larger wall display. As you enter the museum, there are multiple stations for reading pages of the Sagas. These are the stories of Icelandic history from the 9th to 11th centuries, beautifully illustrated.
    [Image: Njáls saga miniature]
    They have been scanned, so you can browse the pages (with translations) and not damage them. I didn’t have all day to spend there, but after starting some of the Sagas, I wished I had.

    Further in you see the reason for the location: the excavation of the oldest known structure in Iceland, a longhouse, is in the museum! Around it are typical displays with text and audio, explaining the structure and what life was like at that time.

    Then I moved into a smaller dark room with an attractive lit podium (see video below). You could touch it, and it controlled the large display on the wall. The display showed the longhouse as a 3-D virtual reconstruction. As you moved your finger around the circles on the podium, the camera rotated so you could get a good look at all parts of the longhouse. As you moved between circles, a short audio would play to introduce you to the next section. Each circle controlled the longhouse display, but the closer to the center the more “inside” the structure you can see. Fortunately, I found someone else made a better video of the interaction than I did:

    The last display was simple, but took planning and thought. Near the exit was a large table display of the longhouse. It was also a touch interface, where you could put your hand on the table to activate information about how parts of the house were used. Think of the challenges: when I was there, it was surrounded by 10 people, all touching it at once. We were all looking for information in different languages. It has to be low enough for everyone to see, but not so low it’s hard to touch. Overall, they did a great job.

    Be sure to do a stopover if you cross the Atlantic!

    Both videos come from Alex Martire on YouTube.

    Warning against overgeneralizing in UX

    I enjoyed this article by Matt Gallivan, Experience Research Manager at AirBnB, about the tendency of experts to overgeneralize their knowledge. I try to watch out for it in my own life: When you’re an expert at one thing, it’s so easy to think you know more than you do about other areas.

    Excerpt:

    Because if you’re a UX researcher, you do yourself and your field no favors when you claim to have all of the answers. In the current digital product landscape, UX research’s real value is in helping to reduce uncertainty. And while that’s not as sexy as knowing everything about everything, there’s great value in it. In fact, it’s critical. It also has the added bonus of being honest.

    Somewhat related, here is a fun page analyzing where and why AirBnB succeeds at usability.

    Three months with Apple Watch

    [Image: my watch face of the moment]

    First, a disclaimer: this isn’t a full-on review of the Watch. There are more qualified people to review the gadget.  The newest one comes from one of the best and most thorough hardware review sites: Anandtech.

    One part of the review was particularly insightful:

    Overall, I found the fitness component of this watch to be a real surprise. I often hear that Apple is good at making things we didn’t know we wanted, but this is probably the first time I’ve really believed that statement. Going into the review, I didn’t really realize that I wanted a solid fitness tracker on a smartwatch, but now I’m really convinced that there is value to such features.

    This has been my experience as well.  I’ve never cared to wear a fitness tracker, but I’m surprised at how much I pore over the stats of my standing, activity, and workout levels.  The watch also provides a surprisingly effective level of motivation (badges & activity circles).

    My activity level (for someone who sits at a desk most of the time) has dramatically increased since the watch (see right; yellow line is when I got the watch).
    [Image: activity graph]

    We used to think of smartphones as the “ubiquitous” technology, but there are times I leave mine behind.  The watch is always on, and there will be interesting use cases and challenges in the future.  I’d love to start my car with my watch!

    Some other random thoughts:

    • The fitness features are great but I wish there was a better way to view my data:
      • View splits on outdoor runs
      • View all my workouts instead of looking for them in the calendar view.
    • Many reviews I’ve read assume the watch will replace the phone.  But any extended activity really tires the shoulder!  My interactions are limited to well under 5-10 seconds.
    • I notice that haptic feedback on the wrist is much less jarring and easier to dismiss (i.e., not as disruptive) than a vibrating phone on the body.
    • The Apple Watch is made for travel:
      • Most airlines have applets for the watch that make it so easy to keep track of gates, departures, & arrivals.
      • Boarding a plane with your watch feels very futuristic but most pass readers are on the right side and I wear my watch on the left resulting in very awkward wrist positions.  Even when the reader was on the left, it is facing upwards requiring me to turn my wrist downwards.
    • It is unobtrusive and looks like a watch, not like a gizmo on my wrist.
    • Apple Pay belongs on the watch.  I’ve used Apple Pay on my phone but it is much more seamless on the watch.
    • Notifications are great if you pare down what can notify you.  I only get notified of VIP mail (select senders) and text messages.
    • Controlling my thermostat, and other electrical devices from my wrist is pretty great.

    Psychology Podcasts

    Those who know me know I am a fiend for podcasts. Since I’m also a fiend for psychology, I can’t help but notice when it pops up in a podcast, even one not focused on psychology. I use many of them in my courses: for example, the This American Life episode on what having schizophrenia sounds like is a must-listen when I hit the Abnormal Psychology chapter in Intro. The Radiolab episode “Memory and Forgetting” is a staple in my Cognitive class, and I take advantage of the multi-disciplinarity of Human Factors to play clips from every area. StartUp had a good episode that illustrates what human factors looks like to a client.

    Over the years I’ve compiled a list of my favorites relating to psychology. Some are clips from longer podcasts while some are dedicated to psychology (e.g., Invisibilia). Each one has a general area of psychology noted (although some hit two or more areas) and if it’s a clip I put the start and end time of the most related audio.

    I hope you enjoy the resource and I will keep updating it as I find more. If you know of any I don’t have listed, please link to it in the comments for the blog and I’ll add it to the spreadsheet.

    Parking sign re-design

    I’ll be the first to admit that I experience cognitive overload while trying to park. When there are three signs and the information needs to be combined across them, or at least each one needs to be searched, considered, and eliminated, I spend a lot of time blocking the street trying to decide if I can park.

    For example, there might be a sign that says “No parking school zone 7-9am and 2-4pm” combined with “2 hour parking only without residential permit 7am-5pm” and an arrow reading “← Parking” to indicate which side of the sign applies. It’s a challenge to figure out where and how long I can park at 1pm, or what happens at 7pm.
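The mental arithmetic those stacked signs demand can be made concrete: each sign contributes a rule over times of day, and the verdict at any moment is the most restrictive answer across all rules. A toy sketch in Python (the two rules paraphrase the hypothetical signs above; real signs also vary by day of week and side of street):

```python
# Each sign is a rule mapping an hour of day (0-23) to a parking verdict.
# Combining signs = taking the most restrictive verdict across all rules.

def school_zone(hour):
    # "No parking school zone 7-9am and 2-4pm"
    if 7 <= hour < 9 or 14 <= hour < 16:
        return "no parking"
    return "ok"

def residential(hour, has_permit=False):
    # "2 hour parking only without residential permit 7am-5pm"
    if 7 <= hour < 17 and not has_permit:
        return "2 hour limit"
    return "ok"

def can_park(hour, has_permit=False):
    verdicts = [school_zone(hour), residential(hour, has_permit)]
    if "no parking" in verdicts:
        return "no parking"
    if "2 hour limit" in verdicts:
        return "2 hour limit"
    return "ok"

print(can_park(13))  # 1pm -> "2 hour limit"
print(can_park(19))  # 7pm -> "ok" (both restrictions have ended)
print(can_park(8))   # 8am -> "no parking" (school zone wins)
```

Even this toy version makes the design point: the driver at the curb is being asked to run this intersection in their head, in real time, while blocking traffic. Sylianteng’s single graphic precomputes it for them.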

    Designer Nikki Sylianteng created new signs for parking in Los Angeles that incorporated all information into a single graphic.

    http://nikkisylianteng.com/project/parking-sign-redesign/

    I still have some difficulty in going back and forth to the legend at the bottom, but probably just because I’ve never seen the signs before. Otherwise, one just needs to know the time and day of the week.

    An interview with her can be found in the LA Weekly where she describes mocking up a laminated example in NY and asking people for feedback on the street via sharpies. (Yay for paper prototypes!) An NPR story focused on the negative reactions of a few harried LA denizens, who predictably said “I like how it was,” but I’d like to see some timed tests of interpreting if it’s ok to park. I’d also like to suggest using a dual-task paradigm to put parkers under the same cognitive load in the lab as they might experience on the street.

    As for NY parking signs – I still can’t parse them.

    The Square Cash Disappearing Act

    Square Cash is a great service – it allows you to send money via an email with no service charge if you’re using your debit card. You can receive money without entering a PIN. I use it all the time to divide up restaurant bills among my friends. That said, I found a usability issue yesterday that I wanted to share.

    I needed to link my debit card to the app, so I followed their very simple instructions for entry. 

    The first screen asks for the card number. The number pad is in telephone order rather than keyboard number-pad order. This is on a phone, so that makes sense, even if I’m much more used to entering these numbers with a keyboard.

    [Screenshot: card number entry screen]

    Next, the expiration date. On my card, the expiration date is 09/16/2016*, so I start to enter it.

    [Screenshot: expiration date field before typing, format guidance visible]

    Here is the screen as you start to enter the date:
    [Screenshot: expiration date field once typing begins, guidance gone]

    I then proceeded to enter 09/16 as I looked at my card, then the CVV, and got an error message about an incorrect card number. Tried again. Same. I did this four times before I realized that the expiration date was month/year. It isn’t as though I’d never seen this before, or never been asked to enter just the month and year from a card, so I thought hard about what tricked me.

    I concluded it was the difference between the second and third screens – the guidance is there before you start typing, but as soon as you enter any digit, the guidance disappears. Since I was looking down at my card, I just entered what I saw and didn’t think to check – especially since the field called for ##/##, which matched the month and day on my card, not ##/####, which could only be a month and year.

    You are welcome to blame the user for this one, but it would be a small fix to keep the background guide visible during entry.
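The small fix suggested above – keeping the format guide visible while the user types – is easy to sketch. Here the unfilled portion of an MM/YY mask stays on screen as typed digits replace it one at a time, so “09/16/2016” on the card can’t be mistaken for the expected input (the mask and function are my own illustration, not Square’s actual code):

```python
# Render an expiration-date field so the format hint ("MM/YY") never
# fully disappears: typed digits replace hint characters one at a time.

MASK = "MM/YY"

def render_expiry(typed_digits: str) -> str:
    """Overlay typed digits onto the MM/YY mask, keeping the rest visible."""
    out = []
    digits = iter(typed_digits)
    for ch in MASK:
        if ch == "/":
            out.append("/")           # literal separator, never consumed
        else:
            out.append(next(digits, ch))  # fall back to the hint character
    return "".join(out)

print(render_expiry(""))     # "MM/YY" - full hint before typing
print(render_expiry("09"))   # "09/YY" - the year hint is still visible
print(render_expiry("0916")) # "09/16"
```

With this behavior, the user who has typed “09” still sees “YY” waiting to be filled, which is exactly the cue that would have stopped me from entering the day from my card.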

    *No, I’m not dumb enough to put my real card number or expiration date in the pictures for this post. 🙂