I don’t know how I missed this back in March! They even use the words “human factor” in the title! The article is an interesting overview of the “NextGen” systems coming to aviation and explains our field to the general public.
Human factors engineering
Even amid the amazing technological achievements and wondrous capabilities of the 21st century, the most critical connection in the airline industry remains the same as it was at the birth of aviation: the human touch.
That’s where Domino comes in. Armed with a degree from George Mason University in human factors engineering, Domino studies the way humans interact with machines.
A classic task of the human factors engineer, Domino says, “is to ensure that information is being presented at the right time to a pilot and in the right form so that the human cognitive capabilities are not simply overwhelmed.”
The question is, says Domino, “What should you put in front of a pilot and in what form should that information be?”
Last week I had the pleasure of presenting in a symposium on automation in safety-critical domains arranged by Dr. Arathi Sethumadhavan at the American Psychological Association annual meeting. My fellow participants were:
Arathi Sethumadhavan, PhD (Medtronic)
Poornima Madhavan, PhD (Old Dominion University)
Julian Sanchez, PhD (Medtronic)
Ericka Rovira, PhD (United States Military Academy)
Everyone presented on issues related to human-automation interaction. I do not have their permission to show their slides so this post is more generally a lay-person’s description of one aspect of automation research: consequences of perceptions of automation reliability.
One of the most popular types of news items we post is stories of when people rely too much on unreliable automation, with sometimes funny or tragic consequences. For example, when people use in-car navigation/GPS systems and slavishly follow their directives without judging conditions for themselves.
This is a classic example of a mis-match between the user’s perception of how reliable the system is and how it actually is. See the figure below:
The Y-axis is how the user perceives the system’s reliability while the X-axis is the actual reliability of the system. Let’s focus on what the two zones in the upper left and lower right represent. When users perceive that the automation is more reliable than it actually is (RED CLOUD), they will over-trust the automation and perhaps rely too much on its occasionally faulty advice (this is where many of the GPS horror stories lie). People may get their mis-judgments about reliability from many sources (marketing literature, limited use, or recommendations).
For example, my digital camera has an auto mode that claims to be able to detect many types of settings (macro, landscape, night) and automatically adjust settings to suit. However, in practice it seems less reliable than the marketing literature suggests. The company exhorts me to TRUST iA (their name for automation)!
So in a few situations where I over-rely on iA, I end up with images that are too dim/bright, etc. The system doesn’t tell me how it came to its decision leaving me out of the loop. Now, I just don’t use iA mode.
The other zone (YELLOW CLOUD) is less studied, but it represents situations where the automation is actually very reliable yet people perceive it as unreliable and so depend on it less–even when their performance degrades as a result. Examples are more difficult to come up with, but one might be the diagnostic decision aids doctors can use to assist in diagnosing patients.
Finally, the line in the middle is proper calibration: perceived reliability is perfectly correlated with the actual reliability of the automation. This is where we want to be most of the time. When our calibration is perfect, we will rely on the automation when we should and NOT when we shouldn’t.
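The figure’s three regions can be summarized in a toy sketch. Everything here is illustrative, not from the original figure: the 0–1 reliability scale, the `tolerance` band around the diagonal, and the example numbers are all assumptions made for the sake of the example.

```python
def calibration_zone(perceived, actual, tolerance=0.05):
    """Classify trust calibration given perceived and actual reliability,
    both on a 0-1 scale. The tolerance band around the diagonal is an
    arbitrary illustrative choice."""
    if perceived - actual > tolerance:
        return "over-trust"   # RED CLOUD: relying on faulty automation
    if actual - perceived > tolerance:
        return "under-trust"  # YELLOW CLOUD: disusing reliable automation
    return "calibrated"       # near the diagonal: appropriate reliance

# A hypothetical GPS user who believes turn-by-turn advice is near-perfect
# when the system is actually right only 80% of the time:
print(calibration_zone(perceived=0.99, actual=0.80))  # over-trust
```

The point of the diagonal in the figure is exactly this comparison: only when perceived and actual reliability track each other does reliance stay appropriate.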
Getting people to properly calibrate their trust and dependence on automation is a complex human factors psychological problem.
Drs. Kelly Caine (of guest post fame) and Dennis Morrison will be presenting on human factors considerations for the design and use of electronic health records. Audience participation is welcome as they discuss this important topic. See abstract below.
In this conversation hour we will discuss the use of electronic health records in clinical practice. Specifically, we will focus on how, when designed using human factors methods, electronic health records may be used to support evidence based practice in clinical settings. We will begin by giving a brief overview of the current state of electronic health records in use in behavioral health settings, as well as outline the potential future uses of such records. Next, we will provide an opportunity for the audience members to ask questions, thus allowing members to guide the discussion to the issues most relevant to them. At the conclusion of the session, participants will have a broader understanding of the role of electronic health records in clinical practice as well as a deeper understanding of the specific issues they face in their practice. In addition, we hope to use this conversation hour as a starting point to generate additional discussions and collaborations on the use of electronic health records in clinical practice, potentially resulting in an agenda for future research in the area of electronic health records in clinical behavioral health practice.
Kelly Caine is the Principal Research Scientist in the Center for Law, Ethics, and Applied Research in Health Information (CLEAR).
Can you imagine the horror of food companies once they realize how much of their treemap has to say SUGAR? This visualization is certainly easier than the rule of thumb I was taught: “If sugar is one of the first three ingredients, it’s a lot of sugar.” Just look at how easy it is to compare peanut butters in the above image.
You can go here to see all of the finalists in the label design competition.