But the technology introduces its own risks: it has created new avenues for error in software and operation, and those mistakes can be more difficult to detect. As a result, a single error that becomes embedded in a treatment plan can be repeated in multiple radiation sessions.
A new linear accelerator had been set up incorrectly, and the hospital’s routine checks could not detect the error because they merely confirmed that the output had not changed from the first day.
In another case, an unnamed medical facility told federal officials in 2008 that Philips Healthcare made treatment planning software with an obscure, automatic default setting, causing a patient with tonsil cancer to be mistakenly irradiated 31 times in the optic nerve. “The default occurred without the knowledge of the physician or techs,” the facility said, according to F.D.A. records.
In a statement, Peter Reimer of Philips Healthcare said its software functioned as intended and that operator error caused the mistake.
The Times found that while this new technology allows doctors to more accurately attack tumors and reduce certain mistakes, its complexity has created new avenues for error — through software flaws, faulty programming, poor safety procedures or inadequate staffing and training.
Asked about the case, Dr. David Keys, a board member of the American College of Medical Physics, said, “It takes less than 15 seconds to collimate [cover non-scanned portions of the body – AM] a baby,” adding: “It could be that the techs at Downstate were too busy. It could be that they were just sloppy or maybe they forgot their training.”
Other problems, according to Dr. Amodio’s e-mail, included using the wrong setting on a radiological device, which caused some premature babies to be “significantly overirradiated.”
“It was a fairly busy time of the day. A lot of other airliners were arriving at the same time, which means that air traffic control needed each of us on very specific routes at specific altitudes, very specific air speeds, in order to maintain this smooth flow of traffic,” he says.
So, air traffic control told him to fly a particular course. He and the other pilot flying the jet set the flight automation system to do it.
“What I anticipated the aircraft to do was to continue this descent,” he says. “Well instead, the aircraft immediately pitched up, very abruptly and much to my surprise. And both of us reached for the yoke, going, ‘What’s it doing?’ and there’s that shock of, ‘Why did it do that, what’s it going to do next?’”
Her work was inspired by our need to increase automation in aviation, due to increases in air traffic. However, automation does not come without costs: what happens to the performance of air traffic controllers and pilots when the automation someday fails? At what point is the operator so “out of the loop” that recovery is impossible?
Sethumadhavan addressed this question by giving people different levels of automation and observing their performance after failures of the automated system. The more automated the system, the more errors occurred when that system failed.
She also measured the situation awareness of her participants at the different levels of automation, with similar results: those who had more of their task automated had less situation awareness, and even after a system failure their awareness remained lower. In other words, they weren’t shocked out of complacency, as one might predict.
Sethumadhavan’s work directly contributes to understanding the human in the loop of the automated system, so that we can predict their behavior and explore design options to prevent errors due to putting the controller out of the loop.
You can read more on Dr. Sethumadhavan’s work here. Congratulations to her on this award!
Photo credit isafmedia under a Creative Commons license.
As I read it, a number of human factors issues jumped out at me, but the focus is almost entirely on engineering issues. This does not mean Volvo will ignore the human factor. After all, I’ve previously posted on their well-done instrument panels. However, it would be fun to read about how they are including human reactions, expectations, and limitations in this work.
The focus on engineering solutions is typical in discussions of safety. Yes, it’s preferable to design out the hazard if you can, but the article even points out that “Another challenge is that wireless signals can be unreliable in moving vehicles” and “Of course, a looming challenge for cars that rely on computers for their safety is that computers are not 100% reliable,” which they would address by “warn(ing) the driver if it’s not working properly.” Sounds like some research on trusting automation might be helpful!
My favorite quote was “No amount of vehicle-to-vehicle communication will help when drivers make monumental mistakes, such as driving into a tree.” Since people do not often choose to drive into trees, I think it would be useful to understand why they might make such a “monumental” mistake. Perhaps swerving to avoid a child in the road? Would the system disallow such a swerve to keep the driver safe, keeping the car on its original path?
We, the drivers, will still have to interact with our zero-fatality car to keep fatalities at zero, and I hope we will be heavily included in this work beyond our heights, weights, and how much impact it takes to fracture our skulls.
Photo credit sidkid under a creative commons license.
HF/Usability Potpourri returns with two recent items.
iPhone Reception Display
Reports from some sites suggest that at least some of the cellular reception issues of the new iPhone 4 are due to improper display of signal strength. This is a neat HF issue because it involves users’ trust in automation (the display of reception bars is actually a computed value, not a raw meter of actual signal strength), the design of information displays, and properly informing users so they can set expectations. Apple is planning to tweak the way those bars get calculated (presumably to be less optimistic) to bring user expectations in line with reality.
From an Apple press release:
Upon investigation, we were stunned to find that the formula we use to calculate how many bars of signal strength to display is totally wrong. Our formula, in many instances, mistakenly displays 2 more bars than it should for a given signal strength. For example, we sometimes display 4 bars when we should be displaying as few as 2 bars.
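To make the “computed value” point concrete, here is a minimal sketch of how a bar display might map a measured signal strength (in dBm) to a number of bars. The thresholds below are invented for illustration; Apple’s actual formula is not public. The point is that an “optimistic” mapping and a more linear one can show very different bars for the same underlying signal.

```python
# Hypothetical illustration: two ways to map measured signal strength
# (in dBm, where higher is better) to displayed bars. All thresholds
# are invented for this sketch, not Apple's real formula.

def bars_optimistic(dbm: int) -> int:
    """An 'optimistic' mapping: most usable signals show 4-5 bars."""
    if dbm >= -91: return 5
    if dbm >= -101: return 4
    if dbm >= -103: return 3
    if dbm >= -107: return 2
    if dbm >= -113: return 1
    return 0

def bars_conservative(dbm: int) -> int:
    """A more even mapping that spreads bars across the usable range."""
    if dbm >= -71: return 5
    if dbm >= -81: return 4
    if dbm >= -91: return 3
    if dbm >= -101: return 2
    if dbm >= -111: return 1
    return 0

# The same weak-but-usable signal looks very different to the user:
print(bars_optimistic(-100), bars_conservative(-100))  # 4 2
```

Under the first mapping, a user at -100 dBm sees a healthy four bars and blames any dropped call on the phone; under the second, the display matches the marginal signal, and the user’s expectations stay in line with reality.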
Mozilla Browser Visualization
Next, Mozilla, creators of Firefox, present some interesting visualizations of what users are clicking in Firefox. As expected, the back button is one of the most frequently clicked items (used by 93% of all users).
Interestingly, the RSS icon in the location bar (the orange square icon used to subscribe to blogs) showed some operating system differences. Five percent of PC/Windows users clicked it, 11% of Mac users, and about 14% of Linux users. Indicative of experiential differences? PC users less aware of blogs/blog readers?
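The per-OS breakdown above boils down to a simple aggregation over per-user click records. Here is a minimal sketch of that computation; the sample data is invented to mirror the reported Windows and Mac rates, and is not Mozilla’s raw data.

```python
# Minimal sketch of the aggregation behind a per-OS click-rate chart:
# given (os, did_click) records for one UI element, compute the share
# of users on each OS who clicked it. Sample data is hypothetical.

from collections import defaultdict

def click_rate_by_os(records):
    clicked = defaultdict(int)
    total = defaultdict(int)
    for os_name, did_click in records:
        total[os_name] += 1
        clicked[os_name] += int(did_click)
    return {os_name: clicked[os_name] / total[os_name] for os_name in total}

sample = [("Windows", True)] + [("Windows", False)] * 19 \
       + [("Mac", True)] * 2 + [("Mac", False)] * 18  # invented counts
print(click_rate_by_os(sample))  # {'Windows': 0.05, 'Mac': 0.1}
```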
Our own analytics show that the vast majority of our readers visit from PC-based Firefox installations. As a service to our readers, here is the subscribe link to our blog 🙂
I don’t feel that I am in control of the information I share on Facebook, and of the information my friends share… FB has total control of (some of) my information, and I don’t like that.
It’s not that Yohann didn’t like Facebook; he did. He liked being able to see his friends’ latest photos and keep up with status updates. The problem was that Yohann (who is, by the way, a very smart, tech-savvy guy) felt unable to use the Facebook user interface to effectively maintain control of his information.
The root of this problem could be one of two things. It could be that Facebook has adopted the “evil interface” strategy (discussed by Rich previously on the human factors blog), where an interface is not designed to help users accomplish their goals easily (a key tenet of human factors) but is instead designed to encourage (or trick) users into behaving the way the interface designer wants, even if that’s not what the user really wants. Clearly, this strategy is problematic for a number of reasons, not least of which, from Facebook’s perspective, is that users will stop using Facebook altogether if they feel tricked or not in control.
A more optimistic perspective is that the problem of privacy on Facebook is a human factors one: the privacy settings on Facebook need to be redesigned because they are currently not easy to use. Here are a few human factors issues I’ve noticed.
Lack of Feedback
In general, there is very little feedback provided to users about the privacy level of different pieces of information on their Facebook profile. For example, by default, Facebook now considers your name, profile picture, gender, current city, networks, friend list, and Pages to all be public information. However, no feedback is given to users as they enter or change this information to indicate that this is considered public information.
While Facebook did introduce a preview function which shows a preview of what information a Facebook friend would see should they visit your profile (which is a great idea!), the preview function does not provide feedback to a user about what information they are sharing publicly or with apps. For example, you can’t type “Yelp” into the preview window to see what information Facebook would share with Yelp through Facebook connect.
No Training (Instructions)
Finally, Facebook does not provide any training and only minimal instructions for users on how to manage privacy settings.
Fortunately, there are some relatively simple human factors solutions that could help users manage their privacy without writing their own Dear John letter to Facebook.
In terms of interface changes to increase feedback, Facebook could, for example, notify users when they are entering information that Facebook considers public by placing an icon beside the text box. That way, users would get immediate feedback about which information would be shared publicly.
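The proposed icon rule is straightforward to express. Below is a hedged sketch of the decision logic: the list of public-by-default fields comes from the post above, while the function name and the shape of the per-user overrides are invented for illustration (Facebook’s actual settings model is not public).

```python
# Sketch of the feedback rule proposed above: show a "public" icon
# next to a profile field when it will be shared publicly, either by
# Facebook's default or by the user's own setting. Field names follow
# the post; the overrides structure is hypothetical.

PUBLIC_BY_DEFAULT = {
    "name", "profile_picture", "gender", "current_city",
    "networks", "friend_list", "pages",
}

def needs_public_icon(field: str, user_overrides: dict) -> bool:
    """Show the icon unless the user has restricted this field."""
    default = "public" if field in PUBLIC_BY_DEFAULT else "friends"
    visibility = user_overrides.get(field, default)
    return visibility == "public"

print(needs_public_icon("current_city", {}))  # True: public by default
print(needs_public_icon("current_city", {"current_city": "friends"}))  # False
```

The human factors payoff is that the feedback appears at the moment of data entry, when the user can still act on it, rather than being buried in a separate settings page.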
Perhaps if Facebook decides to take a human factors approach to privacy in the future, Yohann will re-friend Facebook.
The Consumerist recently posted on something we haven’t tackled in our posts on electronic medical records: patient trust and privacy.
The California HealthCare Foundation recently released the results of a survey on electronic medical records and consumer behavior. The survey found that 15% of people would hide things from their doctor if the medical record system shared anonymous data with other organizations. Another 33% weren’t sure, but would consider hiding something.
When people were asked about accessing their personal health records (PHRs) online, they said (from the report):
PHR Users Pay More Attention. More than half of PHR users have learned more about their health as a result of their PHR and one third of those say they used the PHR to take a specific action to improve their health.
Low-Income, Chronically Ill Benefit More from PHRs. Nearly 60% of PHR users with incomes below $50,000 feel more connected to their doctor as a result of their PHR, compared to 31% of higher income users. And four out of ten PHR users with multiple chronic conditions did something to improve their health, compared to 24% of others interviewed.
Doctors Are Most Trusted. About half of all survey respondents say they want to use PHRs provided by their physicians (58%) or insurers (50%). Just one in four (25%) reports wanting to use PHRs developed and marketed by private technology companies.
Privacy Remains a Concern. Sixty-eight percent of respondents are very or somewhat concerned about the privacy of their medical records, about the same number who were concerned in a 2005 CHCF survey. PHR users are less worried about the privacy of the information in their PHR.