I was reading an article on my local news today and saw this graphic, apparently made for the article.
Being from Alabama, and just a pattern-recognition machine in general, I immediately noticed an anomaly: the lightest pink, surrounded on all sides by the darkest red? Unlikely. The writer helpfully provided a source, though, from the FBI, so I could look at the data myself.
There, right at the start, is a footnote for Alabama: “3 Limited supplemental homicide data were received.” Illinois is the only other state with a footnote, but because its value isn’t so different from its neighbors’, it didn’t stand out enough for me to notice.
Florida was not included in the FBI table and is therefore grey – a good choice for showing there were no data for that state. But for Alabama and Illinois, it’s misleading to include known bad data in a graphic with no explanation. Those states should also be grey, rather than implying that the limited information is the truth.
I looked up similar data from other sources to check how misleading the graphic was. Because wouldn’t it be nice if my home state had figured out some magic formula for preventing firearm deaths? Unfortunately, Centers for Disease Control and Prevention (CDC) statistics on gun deaths put Alabama in the top four states for the most gun deaths. That’s quite the opposite of the optimism-inducing light pink in the first graphic. The graph below is for 2014 while the first graphic is for 2013, but in case you might be thinking something changed between those years, I also looked up 2012 (the CDC appears to publish these data every two years). The CDC put firearm deaths per person in Alabama even higher that year than in 2014.
I became interested in using “big data” for A/B testing after a speaker from Red Hat gave a talk in our area about it a couple of years ago. It’s a tantalizing idea: come up with a change, roll it out to some small percentage of your users, and pull it back immediately if it doesn’t work or isn’t better than the original. It’s even more amazing when you consider that a “small percentage” can be thousands and thousands of people – a dream for any researcher. Certainly, this connects to last year’s news coverage of the controversy over Facebook’s A/B testing adventures.
The only con I can think of is that when something works or doesn’t, you may not know why. We are always fumbling toward success, but maybe it’s not good to encourage fumbling over the development of theory.
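For the curious, here is a minimal sketch of the kind of check that decides whether to keep or pull back a change. Neither the talk nor Facebook’s methods are being quoted; the numbers and names are made up, and I’m assuming a simple conversion-rate metric compared with a two-proportion z-test.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z-statistic for the difference between two conversion rates
    (control A vs. variant B), using the pooled standard error."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical rollout: the variant went to a small slice of users.
z = two_proportion_z(successes_a=480, n_a=10_000,  # control: 4.8% convert
                     successes_b=130, n_b=2_000)   # variant: 6.5% convert
significant = abs(z) > 1.96  # two-sided test at the 5% level
print(f"z = {z:.2f}, keep the change: {significant}")
```

Note that a test like this only tells you *whether* the variant differs, which is exactly the con above: it never tells you *why*.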
I enjoyed this article by Matt Gallivan, Experience Research Manager at Airbnb, about the tendency of experts to overgeneralize their knowledge. I try to watch out for this in my own life: when you’re an expert at one thing, it’s easy to think you know more than you do about other areas.
Because if you’re a UX researcher, you do yourself and your field no favors when you claim to have all of the answers. In the current digital product landscape, UX research’s real value is in helping to reduce uncertainty. And while that’s not as sexy as knowing everything about everything, there’s great value in it. In fact, it’s critical. It also has the added bonus of being honest.
A recently released report, prepared in March 2013, reveals the process behind the creation of Healthcare.gov. Hindsight is always 20/20, but we have also worked hard to establish best practices for considering both engineering and the user in software development, and those contributions need to be valued, especially on large-scale projects. Looking through the slides, one thing I note is that even this improved approach barely mentions the end users of the website. A single slide states “Identify consumer paths; review and modify vignettes,” and its two examples are users who have more or less complex needs when signing up for insurance. I see no mention of involving actual users prior to release.
Consultants noted there was no clear leader in charge of the project, which we now know contributed to its disastrous release. They also noted there had been no “end-to-end testing” of the full implementation, and we now know there never was.
Some of this may fall on us, for not being convincing enough that human factors methods are worth the investment. How much would the public be willing to pay for a solid usability team to work with the website developers?
Just a short note about two new HF-oriented blogs. First, Arathi Sethumadhavan, Ph.D., has started a blog. She is a Human Factors Scientist at Medtronic’s Cardiac Rhythm and Disease Management division and received her PhD in Experimental Psychology (Human Factors) from Texas Tech University. Second, Ergonomics in Design, a publication of the Human Factors and Ergonomics Society, has also started a blog. Check ’em out!
Information Foraging Theory is a theory of human-information interaction that aims to explain and predict how people will best shape themselves to their information environments, and how information environments can best be shaped to people. The approach involves a kind of reverse engineering in which the analyst asks (a) what the nature of the task and information environments is, (b) why a given system is a good solution to the problem, and (c) how that “ideal” solution is realized (approximated) by mechanism.
Typically, the key steps in developing a model of information foraging involve: (a) a rational analysis of the task and information environment (often drawing on optimal foraging theory from biology) and (b) a computational production-system model of the cognitive structure of the task. I will briefly review work on individual information seeking, and then focus on how this work is being expanded to studies of information production and sense-making in technology-mediated social systems such as wikis, social tagging, social network sites, and Twitter.
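The rational-analysis step typically borrows the patch model from optimal foraging theory, in which a forager divides its time between moving among patches (pages, sites) and exploiting them. As a rough gloss of the quantity being optimized (this notation is a common textbook presentation, not a quotation from the talk):

```latex
% Rate-of-gain formulation from the patch model of optimal
% foraging theory (my gloss, not quoted from the source):
\[
  R = \frac{G}{T_B + T_W}
\]
% where R is the overall rate of valuable information gained,
% G is the total information gained, T_B is the time spent
% between patches (navigating, searching), and T_W is the time
% spent within patches (reading, extracting). The forager is
% assumed to adapt its behavior so as to maximize R.
```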
In recent years, we have been extending our studies to deal with social interactions on the Web (e.g., wikis, tagging systems, Twitter). This has led to studies of how people assess source credibility (expertise, trustworthiness, bias) and how user interfaces might affect such judgments.
“A couple of days ago, a friend was asking me for a restaurant recommendation. Easy task, I thought. I had some restaurants in mind and just needed to check and see if they were open and send her the websites. What should have been a 5-minute email turned into a half-hour nightmare as I slogged through websites that are more intent on impressing me with movies, music, and other annoyances than on giving me direct information.”
“Who thinks it’s a good idea to blast annoying music at people visiting your site? Why do they so often rely on Flash, which doesn’t really add anything to the experience, when half the time people are looking up the site on mobile devices to get basic information? Why this bizarre preference for menus in PDF format?”
“… has a notoriously ludicrous website which – granted – may well appeal to the sort of ‘zany’ people who eat there. As for everyone else, it will probably just make you want to smash your fist through your monitor.”
Perhaps I’m still unhappy about spending an hour looking for a place to eat in Little Rock last weekend. Flash websites and PDF menus on a 2007 Sprint Treo are not for the faint of heart.