Some of the blame for our current financial crisis lies in the opacity of legal documents. In this post, a mortgage statement gets a facelift to become easily interpretable, allowing the homeowner to predict as well as understand the payment schedule.
This episode covers the difficulty people have in correctly miming use of a steering wheel (spoiler: they can’t!) and how they can learn to do so correctly with no visual feedback. The researcher interviewed was Steven Cloete, whose website can be found here with more information about research specifics.
99% invisible was recently featured on Radiolab, one of my favorite science podcasts.
One thing that annoys me is the silly argument that paper is bad or that paper kills. Such hollow arguments are used to encourage technology adoption in airplane cockpits, the classroom, and hospitals. Usually they are accompanied by silly statistics about how much paper is saved, how much less weight is carried, or how much easier it will be to look through documents (I use an iPad to hold hundreds of articles, and while I can *hold* more articles, it has not translated to more reading and it does not improve my reading comprehension at all).
Hospitals and doctors’ offices, hoping to curb medical error, have invested heavily to put computers, smartphones and other devices into the hands of medical staff for instant access to patient data, drug information and case studies.
But like many cures, this solution has come with an unintended side effect: doctors and nurses can be focused on the screen and not the patient, even during moments of critical care. And they are not always doing work; examples include a neurosurgeon making personal calls during an operation, a nurse checking airfares during surgery, and a poll showing that half of technicians running bypass machines admitted to texting during a procedure.
This NYT article discusses the “new” scourge of rude people interacting with their phones in public via voice thanks in large part to Siri, Apple’s new virtual assistant.
This article reminded me of something slightly different about human interaction with virtual assistants or automation. In a 2004 paper, researchers Parasuraman and Miller wondered if automation that possessed human-like qualities would cause people to alter their behavior.
They compared automation that made suggestions in a polite way or a rude way (always interrupting you). As you might expect, automation that was polite elicited higher ratings of trust and dependence.
This might be one reason why Siri has a playful, almost human-like personality instead of being a robot servant that merely carries out your commands. The danger is that with assistants perceived as human-like, people will raise their expectations to unreasonable levels, like mistakenly ascribing political motivations to them.
Lastly, the graph shown below was in the latest issue of Wired magazine. I think it’s a nice complement to the perceived reliability graph we showed in a previous post: