Category Archives: trust

Radiation: The Difficulty of Monitoring the Invisible – Post 2 of 2

This post continues our list of articles on HF-related errors in radiation-delivering healthcare devices.

As Technology Surges, Radiation Safeguards Lag

The technology introduces its own risks: it has created new avenues for error in software and operation, and those mistakes can be more difficult to detect. As a result, a single error that becomes embedded in a treatment plan can be repeated in multiple radiation sessions.

In one case, a new linear accelerator had been set up incorrectly, and the hospital’s routine checks could not detect the error because they merely confirmed that the output had not changed since the first day.

In another case, an unnamed medical facility told federal officials in 2008 that Philips Healthcare made treatment planning software with an obscure, automatic default setting, causing a patient with tonsil cancer to be mistakenly irradiated 31 times in the optic nerve. “The default occurred without the knowledge of the physician or techs,” the facility said, according to F.D.A. records.

In a statement, Peter Reimer of Philips Healthcare said its software functioned as intended and that operator error caused the mistake.

Radiation Offers New Cures, and Ways to Do Harm

The Times found that while this new technology allows doctors to more accurately attack tumors and reduce certain mistakes, its complexity has created new avenues for error — through software flaws, faulty programming, poor safety procedures or inadequate staffing and training.

X-Rays and Unshielded Infants

Asked about the case, Dr. David Keys, a board member of the American College of Medical Physics, said, “It takes less than 15 seconds to collimate [cover non-scanned portions of the body – AM] a baby,” adding: “It could be that the techs at Downstate were too busy. It could be that they were just sloppy or maybe they forgot their training.”

Other problems, according to Dr. Amodio’s e-mail, included using the wrong setting on a radiological device, which caused some premature babies to be “significantly overirradiated.”

Automation Issues Hit the Big Time on NPR

NPR brings home the safety issues of too much cockpit automation.

From the NPR story:

“It was a fairly busy time of the day. A lot of other airliners were arriving at the same time, which means that air traffic control needed each of us on very specific routes at specific altitudes, very specific air speeds, in order to maintain this smooth flow of traffic,” he says.

So, air traffic control told him to fly a particular course. He and the other pilot flying the jet set the flight automation system to do it.

“What I anticipated the aircraft to do was to continue this descent,” he says. “Well instead, the aircraft immediately pitched up, very abruptly and much to my surprise. And both of us reached for the yoke, going, ‘What’s it doing?’ and there’s that shock of, ‘Why did it do that, what’s it going to do next?’”

We’ve posted on this topic before, when we discussed Dr. Sethumadhavan’s work and Dr. Sanchez’s work. For more cutting-edge automation failure research, watch these labs:

If your lab should be listed and isn’t, send me an email!

Blogging APA Division 21: The Cost of Automation Failure

Arathi Sethumadhavan, currently of Medtronic and recently of Texas Tech, was this year’s winner of the George E. Briggs dissertation award for the best dissertation in the field of applied experimental psychology. Her advisor was Frank Durso.

Her work was inspired by the need to increase automation in aviation as air traffic grows. However, automation does not come without costs: what happens to the performance of air traffic controllers and pilots when the automation someday fails? At what point is the operator so “out of the loop” that recovery is impossible?

Sethumadhavan addressed this question by giving people different levels of automation and observing their performance after failures of the automated system. The more automated the system, the more errors occurred when that system failed.

She also measured her participants’ situation awareness at each level of automation, with similar results: those who had more of their task automated had less situation awareness, and even after a system failure their awareness remained lower. In other words, they weren’t shocked out of complacency, as one might predict.

Sethumadhavan’s work directly contributes to understanding the human in the loop of the automated system, so that we can predict operator behavior and explore design options to prevent errors caused by putting the controller out of the loop.

You can read more on Dr. Sethumadhavan’s work here. Congratulations to her on this award!

Photo credit isafmedia under a Creative Commons license.

The Zero-Fatality Car

I ran across this fascinating article from ComputerWorld on Volvo’s goal of creating a zero-fatality car by 2020.

As I read it, a number of human factors issues jumped out at me, but the focus is almost entirely on engineering issues. This does not mean Volvo will ignore the human factor. After all, I’ve previously posted on their well-done instrument panels. However, it would be fun to read about how they are including human reactions, expectations, and limitations in this work.

The focus on engineering solutions is typical in discussions of safety. Yes, it’s preferable to design out the hazard if you can, but the article even points out that “Another challenge is that wireless signals can be unreliable in moving vehicles” and “Of course, a looming challenge for cars that rely on computers for their safety is that computers are not 100% reliable,” which they would address by “warn(ing) the driver if it’s not working properly.” Sounds like some research on trusting automation might be helpful!

My favorite quote was “No amount of vehicle-to-vehicle communication will help when drivers make monumental mistakes, such as driving into a tree.” Since people do not often choose to drive into trees I think it would be useful to understand why they might make such a “monumental” mistake. Perhaps swerving to avoid a child in the road? Would the system disallow such a swerve to keep the driver safe, keeping the car on the original path?

We, the drivers, will still have to interact with our zero-fatality car to keep fatalities at zero, and I hope we will be heavily included in this work beyond our heights, weights, and how much impact it takes to fracture our skulls.

Photo credit sidkid under a Creative Commons license.

Usability Potpourri

HF/Usability Potpourri returns with two recent items.

iPhone Reception Display

Reports from some sites suggest that at least some of the cellular reception issues of the new iPhone 4 are due to improper display of signal strength. This is a neat HF issue because it involves users’ trust in automation (the display of reception bars is a computed value, not a raw meter of actual signal strength), the design of information displays, and properly informing users so they can set expectations. Apple is planning to tweak the way those bars get calculated (presumably to be less optimistic) to bring user expectations in line with reality.

From an Apple press release:

Upon investigation, we were stunned to find that the formula we use to calculate how many bars of signal strength to display is totally wrong. Our formula, in many instances, mistakenly displays 2 more bars than it should for a given signal strength. For example, we sometimes display 4 bars when we should be displaying as few as 2 bars.
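To see how a formula like this can go wrong, here is a minimal sketch of how a bar display typically works: the phone thresholds a measured signal level (RSSI, in dBm) into a small number of bars. The cutoff values below are invented for illustration and are not Apple’s actual formula, but an optimistic set of cutoffs awards bars at weaker signals than a conservative set, producing exactly the “two extra bars” behavior Apple describes.

```typescript
// Hypothetical sketch: mapping a measured signal level (RSSI, in dBm)
// to displayed bars by thresholding. These cutoffs are invented for
// this example; they are not Apple's actual values.
function barsFor(rssiDbm: number, cutoffs: number[]): number {
  // Each cutoff is the minimum RSSI needed to earn one more bar.
  let bars = 0;
  for (const cutoff of cutoffs) {
    if (rssiDbm >= cutoff) bars++;
  }
  return bars;
}

// An "optimistic" mapping hands out bars at weaker signal levels than
// a "conservative" one, so the same measurement displays more bars.
const optimistic = [-113, -107, -103, -99, -85];
const conservative = [-107, -99, -93, -87, -78];

console.log(barsFor(-95, optimistic));   // 4 bars
console.log(barsFor(-95, conservative)); // 2 bars from the same signal
```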

Mozilla Browser Visualization

Next, Mozilla, creators of Firefox, present some interesting visualizations of what users are clicking in Firefox.  As expected, the back button is one of the most frequently clicked items (93% of all users).

Interestingly, the RSS icon in the location bar (the orange square icon used to subscribe to blogs) showed some operating system differences.  Five percent of PC/Windows users clicked it, 11% of Mac users, and about 14% of Linux users.  Indicative of experiential differences?  PC users less aware of blogs/blog readers?

Our own analytics show that the vast majority of our readers visit from PC-based Firefox installations.  As a service to our readers, here is the subscribe link to our blog 🙂

Facebook and Privacy: A Guest Post by Kelly Caine

Many of my friends have threatened to leave Facebook because of their concerns over privacy, but for the first time, this week one of them actually made good on the threat.

In his “Dear John” letter, my friend Yohann summarized the issue:

I don’t feel that I am in control of the information I share on Facebook, and of the information my friends share… FB has total control of (some of) my information, and I don’t like that.

It’s not that Yohann didn’t like Facebook; he did. He liked being able to see his friends’ latest photos and keep up with status updates. The problem was that Yohann (who is, by the way, a very smart, tech-savvy guy) felt unable to use the Facebook user interface to effectively maintain control of his information.

The root of this problem could be one of two things. It could be that Facebook has adopted the “evil interface” strategy (discussed by Rich previously on the human factors blog), where an interface is not designed to help a user accomplish their goals easily (a key tenet of human factors), but is instead designed to encourage (or trick) a user into behaving the way the interface designer wants (even if that’s not what the user really wants). Clearly, this strategy is problematic for a number of reasons, not the least of which, from Facebook’s perspective, is that users will stop using Facebook altogether if they feel tricked or not in control.

A more optimistic perspective is that the problem of privacy on Facebook is a human factors one: the privacy settings on Facebook need to be redesigned because they are currently not easy to use.  Here are a few human factors issues I’ve noticed.

Changes to Privacy Policy Violate Users’ Expectations

Facebook’s privacy policies have changed drastically over the years (The EFF provides a good description of the changes and Matt McKeon has made a very nice visualization of the changes).

Users, especially expert users, had likely already developed expectations about what profile information would be shared with whom. Each time Facebook changed the privacy policy (historically, always in the direction of sharing more), users had to exert effort to reformulate their understanding of what was shared by default, and work to understand how to keep certain information from being made more widely available.

Lack of Feedback

In general, there is very little feedback provided to users about the privacy level of different pieces of information on their Facebook profile. For example, by default, Facebook now considers your name, profile picture, gender, current city, networks, friend list, and Pages to all be public information. However, no feedback is given to users as they enter or change this information to indicate that this is considered public information.

It is unclear which information is public and which is non-public

While Facebook did introduce a preview function which shows a preview of what information a Facebook friend would see should they visit your profile (which is a great idea!), the preview function does not provide feedback to a user about what information they are sharing publicly or with apps. For example, you can’t type “Yelp” into the preview window to see what information Facebook would share with Yelp through Facebook connect.

You cannot preview what information Facebook shares with sites and apps

No Training (Instructions)

Finally, Facebook does not provide any training and only minimal instructions for users on how to manage privacy settings.

Solutions

Fortunately, there are some relatively simple human factors solutions that could help users manage their privacy without writing their own Dear John letter to Facebook.

In terms of user expectations, given the most recent changes to Facebook’s privacy policy, it’s hard to imagine how much more the Facebook privacy policy can change. So, from an expectations standpoint, I guess that could be considered good?

In terms of interface changes to increase feedback, Facebook could, for example, notify users when they are entering information that Facebook considers public by placing an icon beside the text box. That way, users would get immediate feedback about which information will be shared publicly (a rough sketch of this pattern follows below).

Globe icon indicates shared information
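Here is a minimal sketch of that feedback pattern, assuming hypothetical field names and page structure (nothing here is Facebook’s actual code): each profile field is tagged with a visibility level, and any field the site treats as public gets a globe indicator rendered beside its text box.

```typescript
// Hypothetical sketch of inline privacy feedback: each profile field
// carries a visibility level, and public fields get a globe indicator
// rendered beside their input box. Field names and IDs are invented.
type Visibility = "public" | "friends" | "only-me";

interface ProfileField {
  name: string;
  inputId: string; // id of the field's text box in the DOM
  visibility: Visibility;
}

const fields: ProfileField[] = [
  { name: "Current City", inputId: "current-city", visibility: "public" },
  { name: "Phone Number", inputId: "phone", visibility: "friends" },
];

function annotatePublicFields(fields: ProfileField[]): void {
  for (const field of fields) {
    if (field.visibility !== "public") continue;
    const input = document.getElementById(field.inputId);
    if (!input) continue;
    // Place a globe icon with a tooltip right after the text box, so
    // the warning is visible while the user is typing.
    const icon = document.createElement("span");
    icon.textContent = "🌐";
    icon.title = `${field.name} is visible to everyone`;
    input.insertAdjacentElement("afterend", icon);
  }
}

annotatePublicFields(fields);
```

The important design choice is that the warning appears at the point of data entry, when the user can still decide not to share, rather than after the information is already public.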

Finally, in terms of training, it’s fortunate that a number of people outside of Facebook have already stepped up to provide instructions on how to use Facebook’s privacy settings. For example, in a post that dominated the NYT “most emailed” list for over a month, Sarah Perez explained the 3 Facebook settings she thought every user should know after Facebook made sweeping changes to its privacy policy that dramatically increased the amount of profile information shared publicly. Then, after the most recent changes (in April 2010), Gina Trapani at Fast Company provided easy-to-use instructions complete with screenshots.

Perhaps if Facebook decides to take a human factors approach to privacy in the future, Yohann will re-friend Facebook.

Kelly Caine, PhD, is a research fellow in the School of Informatics and Computing at Indiana University. Her primary research interests include privacy, health technology, human factors, HCI, aging, and designing for special populations.

(post image from Flickr user hyku)

Trust & Electronic Medical Records

The Consumerist recently posted on something we haven’t tackled in our posts on electronic medical records: patient trust and privacy.

The California HealthCare Foundation recently released the results of a survey on electronic medical records and consumer behavior. The survey found that 15% of people would hide things from their doctor if the medical record system shared anonymous data with other organizations. Another 33% weren’t sure, but would consider hiding something.

Interestingly, the data come from a report titled “New National Survey Finds Personal Health Records Motivate Consumers to Improve Their Health.”

When asked about accessing their personal health records (PHRs) online, respondents said (from the report):

  • PHR Users Pay More Attention. More than half of PHR users have learned more about their health as a result of their PHR and one third of those say they used the PHR to take a specific action to improve their health.
  • Low-Income, Chronically Ill Benefit More from PHRs. Nearly 60% of PHR users with incomes below $50,000 feel more connected to their doctor as a result of their PHR, compared to 31% of higher income users. And four out of ten PHR users with multiple chronic conditions did something to improve their health, compared to 24% of others interviewed.
  • Doctors Are Most Trusted. About half of all survey respondents say they want to use PHRs provided by their physicians (58%) or insurers (50%). Just one in four (25%) reports wanting to use PHRs developed and marketed by private technology companies.
  • Privacy Remains a Concern. Sixty-eight percent of respondents are very or somewhat concerned about the privacy of their medical records, about the same number who were concerned in a 2005 CHCF survey. PHR users are less worried about the privacy of the information in their PHR.