NPR pointed me to a two-part series in the Boston Globe examining the incessant din of patient alarms.
The monitor repeatedly sounded an alarm — a low-pitched beep. But on that January night two years ago, the nurses at St. Elizabeth’s Medical Center in Brighton didn’t hear the alarm, they later said. They didn’t discover the patient had stopped breathing until it was too late.
These were just two of more than 200 hospital patients nationwide whose deaths between January 2005 and June 2010 were linked to problems with alarms on patient monitors that track heart function, breathing, and other vital signs, according to an investigation by The Boston Globe. As in these two instances, the problem typically wasn’t a broken device. In many cases it was because medical personnel didn’t react with urgency or didn’t notice the alarm.
They call it “alarm fatigue.” Monitors help save lives by alerting doctors and nurses that a patient is — or soon could be — in trouble. But with the use of monitors rising, their beeps can become so relentless, and false alarms so numerous, that nurses become desensitized — sometimes leaving patients to die without anyone rushing to their bedside.
This is a well-studied topic in human–automation interaction research, and it is easy to see why false alarms are so prevalent in healthcare settings. If you were hooked up to a patient monitoring device, which would you rather have: (a) a machine that beeps infrequently but misses some important changes (low false alarm rate, high miss rate), or (b) one that beeps constantly at the mere possibility that something is wrong, and is frequently wrong (high false alarm rate, low miss rate)? You’d probably pick option (b), because of the inherent risk in missing a life-threatening critical event.
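This trade-off can be illustrated with a toy signal detection simulation. Everything here is a made-up sketch for intuition — `simulate_alarms`, the event rate, and the signal values are hypothetical, not drawn from the article or any real monitoring system. Lowering the alarm threshold drives the miss rate down but floods the nurse with false alarms; raising it does the reverse.

```python
import random

def simulate_alarms(threshold, n=10_000, event_rate=0.02):
    """Toy monitor: a noisy vital-sign reading alarms when it
    exceeds `threshold`. True critical events shift the reading up.
    Returns (false alarm rate, miss rate). Illustrative numbers only."""
    false_alarms = misses = events = 0
    for _ in range(n):
        event = random.random() < event_rate      # is a critical event occurring?
        events += event
        # Normal readings ~ N(0, 1); events shift the mean to 3.
        reading = random.gauss(3.0 if event else 0.0, 1.0)
        if reading > threshold:
            if not event:
                false_alarms += 1                 # alarm with no event
        elif event:
            misses += 1                           # event with no alarm
    return false_alarms / (n - events), misses / events

# A liberal (low) threshold: few misses, but a constant din of false alarms.
# A strict (high) threshold: quiet, but it misses real events.
for t in (1.0, 3.0):
    fa, miss = simulate_alarms(t)
    print(f"threshold={t}: false-alarm rate={fa:.3f}, miss rate={miss:.2f}")
```

Designers of real monitors lean heavily toward the liberal end of this dial, which is exactly what produces the relentless beeping the Globe series describes.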
But, as past research has shown (and the linked articles demonstrate), a high false alarm rate can have very detrimental effects on the person monitoring the alarms. Keep in mind: the nurses in the story DO NOT WANT to ignore the alarms! The article walks a fine line, stopping just short of blaming the user. The sheer number of alarms makes it difficult for nurses and other healthcare workers to distinguish true critical events from false alarms.
Automation in healthcare is a topic I’ve only recently dipped my toes into, and it’s fascinating and complex. Here are some papers on false alarms and how operators/users are affected.
Dixon, S., Wickens, C. D., & McCarley, J. S. (2007). On the independence of compliance and reliance: Are automation false alarms worse than misses? Human Factors, 49(4), 564-572.
Meyer, J. (2001). Effects of warning validity and proximity on responses to warnings. Human Factors, 43(4), 563-572.
(photo: flickr user moon_child; CC BY-NC 2.0)