Category Archives: training

Wiener’s Laws

The article “The Human Factor” in Vanity Fair is two years old, but I can’t believe I missed posting it, so here it is! It’s a riveting read, with details of the Air France Flight 447 accident and an intelligent discussion of the impact automation has on human performance. Dr. Nadine Sarter is interviewed, and I learned of a list of flight-specific “laws” developed by Dr. Earl Wiener, a past president of HFES.

“Wiener’s Laws,” from the article and from Aviation Week:

  • Every device creates its own opportunity for human error.
  • Exotic devices create exotic problems.
  • Digital devices tune out small errors while creating opportunities for large errors.
  • Invention is the mother of necessity.
  • Some problems have no solution.
  • It takes an airplane to bring out the worst in a pilot.
  • Whenever you solve a problem, you usually create one. You can only hope that the one you created is less critical than the one you eliminated.
  • You can never be too rich or too thin (Duchess of Windsor) or too careful about what you put into a digital flight-guidance system (Wiener).
  • Complacency? Don’t worry about it.
  • In aviation, there is no problem so great or so complex that it cannot be blamed on the pilot.
  • There is no simple solution out there waiting to be discovered, so don’t waste your time searching for it.
  • If at first you don’t succeed… try a new system or a different approach.
  • In God we trust. Everything else must be brought into your scan.
  • Any pilot who can be replaced by a computer should be.
  • Today’s nifty, voluntary system is tomorrow’s F.A.R.

Kudos to the author, William Langewiesche, for a well-researched and well-written piece.

Anne & Rich Interviewed about Human Factors

Anne and I are big proponents of making sure the world knows what human factors is all about (hence the blog).  Both of us were recently interviewed separately about human factors in general as well as our research areas.

The tone is very general and may give lay people a good sense of the breadth of human factors.  Plus, you can hear how we sound!

First, Anne was just interviewed for the radio show “Radio In Vivo”.

[Audio clip]

Late last year, I was interviewed about human factors and my research on the local public radio program Your Day:

[Audio clip]

Excerpts from the NASA ASRS

One of my students last semester (thanks, Ronney!) turned me on to the “Callback” publication from the NASA Aviation Safety Reporting System. These are almost all first-person stories written as case studies of errors and accidents or near accidents. There aren’t enough of them for it to qualify for my list of neat databases, but it certainly makes for interesting reading.

I’ve collected a few below to give a taste of the stories that are included. These are just the top-level descriptions – click through to read the first-person accounts.

From Issue 381, “Upside Down and Backwards”

  1. “An aircraft Mode Selector Panel that ‘looks the same’ whether right side up or upside down, and that can be readily installed either way, is a good example of a problematic design. Confronted with an inverted panel, this Cessna 560 Captain found out what happens when the wrong button is in the right place.”
  2. “Without detailed instructions and clear notation, nearly symmetrical parts can be installed incorrectly. Faced with the replacement of such a part, this CRJ 700 Maintenance Technician wound up with a case of component ‘misorientation.’”

From Issue 383, “When Practice Emergencies Go Bad”

  1. “…a C182 pilot performed a simulated engine failure while undergoing a practical examination. It appears that both the examiner and the examinee were so engrossed in the simulated emergency that they both tuned BEEEEP out BEEEEP the BEEEEP gear BEEEEP warning BEEEEP horn.”
  2. “When faced with a real engine failure, performing the Engine Secure Checklist reduces the chance of a fire on landing. However, actually performing the steps in the Engine Secure Checklist when the engine failure is not real can lead to a real problem.”

From Issue 382, “Fly the Airplane!”

  1. “A review of recent ASRS reports indicates that failure to follow one of the most basic tenets of flight continues to be a concern when pilots are faced with distractions or abnormal situations.”

From Issue 376, “The Fixation Factor”

  1. “The ability to maintain the “big picture” while completing individual, discrete tasks is one of the most critical aspects of working in the aviation environment. Preoccupation with one particular task can degrade the ability to detect other important information. This month’s CALLBACK looks at examples of how fixation adversely affects overall task management.”
  2. “Advanced navigation equipment can provide a wealth of readily available information, but as this Cirrus SR20 pilot learned, sometimes too much information can be a distraction.”

From Issue 375, “Motor Skills: Getting Off to a Good Start”

  1. “The Captain of an air carrier jet experienced a very hot start when distractions and failure to follow normal flow patterns altered the engine start sequence.”
  2. “This pilot was familiar with the proper procedures for hand-propping, but despite a conscientious effort, one critical assumption led to a nose-to-nose encounter.”

Photo credit smartjunco @ Flickr

Humans and Automation on the Colbert Report

Look! A human factors colleague on the Colbert Report! Does this mean we’re cool?

Dr. Missy Cummings, Associate Professor at MIT
Director of the Humans and Automation Lab

Verdict Reached for Air France Rio Crash

The BBC has reported that, according to the incident analysis, the Air France crash that killed 228 people was due to a lack of pilot skill in dealing with a high-altitude stall.

Here is a link to the BEA report from the Bureau d’Enquêtes et d’Analyses. It’s a frightening read, as it gives a moment-by-moment analysis of the last minutes in the cockpit. No emergency was ever noted, and there did not appear to be any mechanical failures. The flight crew seemed to think events were under control the entire time (despite the alarms).

Photo credit Vin Crosbie at Flickr.

For projectors, new technology means new training (and new errors!)

Mode errors! Coming soon to a theater near you? Have you ever forgotten to set your camera back to Auto from Portrait? How about not understanding what those modes mean? Apparently a similar phenomenon occurs in the professional world of movie theaters. There is a special lens filter used for 3-D movies and when it is not removed for normal movies, the brightness of the film suffers. See the story below for details.

A movie lover’s plea: Let there be light: Many theaters misuse 3-D lenses to show 2-D films, squandering brightness, color

So why aren’t theater personnel simply removing the 3-D lenses? The answer is that it takes time, it costs money, and it requires technical know-how above the level of the average multiplex employee. James Bond, a Chicago-based projection guru who serves as technical expert for Roger Ebert’s Ebertfest, said issues with the Sonys are more than mechanical. Opening the projector alone involves security clearances and Internet passwords, “and if you don’t do it right, the machine will shut down on you.’’ The result, in his view, is that often the lens change isn’t made and “audiences are getting shortchanged.’’

I think “and if you don’t do it right, the machine will shut down on you” summed it up nicely!

Photo credit PatrickBeeson at Flickr.

Radiation: The Difficulty of Monitoring the Invisible – Post 2 of 2

This post continues the list of articles on HF-related errors in radiation-delivering healthcare devices.

As Technology Surges, Radiation Safeguards Lag

But the technology introduces its own risks: it has created new avenues for error in software and operation, and those mistakes can be more difficult to detect. As a result, a single error that becomes embedded in a treatment plan can be repeated in multiple radiation sessions.

A new linear accelerator had been set up incorrectly, and the hospital’s routine checks could not detect the error because they merely confirmed that the output had not changed from the first day.
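
The verification gap in that excerpt is worth making concrete: a daily check that only asks “has the output changed since day one?” will pass forever if the error was already there on day one. Below is a toy sketch of the difference between that kind of drift check and a check against the prescription itself; all names and numbers are hypothetical, not taken from any real treatment-planning system.

```typescript
// Toy illustration of the QA gap described above. A drift check that only
// compares today's output to the first-day baseline cannot catch an error
// that was already embedded on day one; a check against the prescription can.
// All values are arbitrary and for illustration only.

const PRESCRIBED_DOSE = 2.0;   // intended output per session (arbitrary units)
const SETUP_ERROR = 1.5;       // hypothetical mis-setup during installation

// Output actually delivered each day, with the embedded setup error.
function measuredOutput(): number {
  return PRESCRIBED_DOSE * SETUP_ERROR;
}

// "Routine check": passes as long as output has not drifted from day one.
function driftCheck(baseline: number, today: number, tolerance = 0.02): boolean {
  return Math.abs(today - baseline) / baseline <= tolerance;
}

// Independent check: compares output against the prescription itself.
function absoluteCheck(prescribed: number, today: number, tolerance = 0.02): boolean {
  return Math.abs(today - prescribed) / prescribed <= tolerance;
}

const dayOneBaseline = measuredOutput();                        // already wrong
console.log(driftCheck(dayOneBaseline, measuredOutput()));      // true: error invisible
console.log(absoluteCheck(PRESCRIBED_DOSE, measuredOutput()));  // false: error caught
```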

In another case, an unnamed medical facility told federal officials in 2008 that Philips Healthcare made treatment planning software with an obscure, automatic default setting, causing a patient with tonsil cancer to be mistakenly irradiated 31 times in the optic nerve. “The default occurred without the knowledge of the physician or techs,” the facility said, according to F.D.A. records.

In a statement, Peter Reimer of Philips Healthcare said its software functioned as intended and that operator error caused the mistake.

Radiation Offers New Cures, and Ways to Do Harm

The Times found that while this new technology allows doctors to more accurately attack tumors and reduce certain mistakes, its complexity has created new avenues for error — through software flaws, faulty programming, poor safety procedures or inadequate staffing and training.

X-Rays and Unshielded Infants

Asked about the case, Dr. David Keys, a board member of the American College of Medical Physics, said, “It takes less than 15 seconds to collimate [cover non-scanned portions of the body – AM] a baby,” adding: “It could be that the techs at Downstate were too busy. It could be that they were just sloppy or maybe they forgot their training.”

Other problems, according to Dr. Amodio’s e-mail, included using the wrong setting on a radiological device, which caused some premature babies to be “significantly overirradiated.”

Accidental Activation During Seat Adjustment on Plane

CNN posted this story about a co-pilot who accidentally bumped a control while adjusting his seat, sending the plane into a 26-degree dive. Disaster was averted only when the pilot returned from the restroom, as the co-pilot apparently lacked the training to correct the error. From the article:

The aviation agency report concluded that the 25-year-old co-pilot had not been trained in the specific scenario the jet encountered and “probably had no clue to tackle this kind of emergency.”

Fortunately disaster was averted, because this story seems to have all the elements that Reason’s Swiss Cheese model of accidents requires (see the rough sketch after the list):

  • Organizational influences – training, and perhaps design of controls
  • Unsafe supervision – temporary absence of supervision
  • Preconditions for unsafe acts – inadequate experience for the complexity of the situation
  • Unsafe acts – Slip, mis-activation of a control

(These are, of course, guesses based on a short news article — I don’t pretend to know everything about this accident.)
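
For what it’s worth, here is a minimal sketch of how those guessed-at layers might be recorded for a structured analysis. The layer names follow Reason’s model; the descriptions are my paraphrase of the news story, not official findings.

```typescript
// A minimal, hypothetical record of the contributing factors listed above,
// organized by the layers of Reason's Swiss Cheese model.

type SwissCheeseLayer =
  | "organizational influences"
  | "unsafe supervision"
  | "preconditions for unsafe acts"
  | "unsafe acts";

interface ContributingFactor {
  layer: SwissCheeseLayer;
  description: string;
}

const seatAdjustmentIncident: ContributingFactor[] = [
  { layer: "organizational influences", description: "training gaps; possibly control design" },
  { layer: "unsafe supervision", description: "captain temporarily out of the cockpit" },
  { layer: "preconditions for unsafe acts", description: "co-pilot inexperienced with the situation" },
  { layer: "unsafe acts", description: "slip: accidental activation of a control" },
];

// In Reason's model an accident requires a hole in every layer to line up;
// here, each of the four layers has at least one contributing factor.
const layersBreached = new Set(seatAdjustmentIncident.map((f) => f.layer)).size;
console.log(layersBreached === 4); // true
```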

Photo Credit grey_um

Facebook and Privacy: A Guest Post by Kelly Caine

Many of my friends have threatened to leave Facebook because of their concerns over privacy, but for the first time, this week one of them actually made good on the threat.

In his “Dear John” letter, my friend Yohann summarized the issue:

I don’t feel that I am in control of the information I share on Facebook, and of the information my friends share… FB has total control of (some of) my information, and I don’t like that.

It’s not that Yohann didn’t like Facebook; he did. He liked being able to see his friends’ latest photos and keep up with status updates. The problem was that Yohann (who is, by the way, a very smart, tech-savvy guy) felt unable to use the Facebook user interface to effectively maintain control of his information.

The root of this problem could be one of two things. It could be that Facebook has adopted the “evil interface” strategy (discussed by Rich previously on the human factors blog), where an interface is not designed to help users accomplish their goals easily (a key tenet of human factors), but is instead designed to encourage (or trick) users into behaving the way the interface designer wants (even if it’s not what the users really want). Clearly, this strategy is problematic for a number of reasons, not the least of which, from Facebook’s perspective, is that users will stop using Facebook altogether if they feel tricked or not in control.

A more optimistic perspective is that the problem of privacy on Facebook is a human factors one: the privacy settings on Facebook need to be redesigned because they are currently not easy to use.  Here are a few human factors issues I’ve noticed.

Changes to Privacy Policy Violate Users’ Expectations

Facebook’s privacy policies have changed drastically over the years (The EFF provides a good description of the changes and Matt McKeon has made a very nice visualization of the changes).

Users, especially expert users, had likely already developed expectations about what profile information would be shared with whom. Each time Facebook changed the privacy policy (historically, always in the direction of sharing more), users had to exert effort to reformulate their understanding of what was shared by default, and work to understand how to keep certain information from being made more widely available.

Lack of Feedback

In general, there is very little feedback provided to users about the privacy level of different pieces of information on their Facebook profile. For example, by default, Facebook now considers your name, profile picture, gender, current city, networks, friend list, and Pages to all be public information. However, no feedback is given to users as they enter or change this information to indicate that this is considered public information.

It is unclear which information is public and which is not

While Facebook did introduce a preview function that shows what a Facebook friend would see should they visit your profile (which is a great idea!), the preview function does not provide feedback about what information you are sharing publicly or with apps. For example, you can’t type “Yelp” into the preview window to see what information Facebook would share with Yelp through Facebook Connect.

You cannot preview what information Facebook shares with sites and apps

No Training (Instructions)

Finally, Facebook provides no training and only minimal instructions for users on how to manage their privacy settings.

Solutions

Fortunately, there are some relatively simple human factors solutions that could help users manage their privacy without writing their own Dear John letter to Facebook.

In terms of user expectations, given the most recent changes to Facebook’s privacy policy, it’s hard to imagine how much more the Facebook privacy policy can change. So, from an expectations standpoint, I guess that could be considered good?

In terms of interface changes to increase feedback to users, Facebook could, for example, notify users when they are entering information that Facebook considers public by placing an icon beside the text box. That way, users would get immediate feedback about which information would be shared publicly.

Globe icon indicates shared information
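
As a rough illustration of that suggestion, here is a small sketch of inline “this is public” feedback on a profile form. The field names, DOM structure, and globe-icon handling are all hypothetical, not Facebook’s actual markup or code.

```typescript
// Hypothetical sketch: append a globe icon next to any form field whose
// contents would be treated as public, so the feedback appears at the moment
// the user enters the information. Field names and markup are invented.

const PUBLIC_FIELDS = new Set([
  "name", "profile_picture", "gender", "current_city",
  "networks", "friend_list", "pages",
]);

function markPublicFields(form: HTMLFormElement): void {
  const inputs = form.querySelectorAll<HTMLInputElement>("input");
  inputs.forEach((input) => {
    if (!PUBLIC_FIELDS.has(input.name)) return;
    const icon = document.createElement("span");
    icon.textContent = " 🌐"; // globe marker shown beside the field
    icon.title = "Anyone can see this information";
    input.insertAdjacentElement("afterend", icon);
  });
}

// Usage (assuming a profile form exists on the page):
// markPublicFields(document.querySelector<HTMLFormElement>("#profile-form")!);
```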

Finally, in terms of training, it’s fortunate that a number of people outside of Facebook have already stepped up to provide users with instructions on how to use Facebook’s privacy settings. For example, in a post that dominated the NYT “most emailed” list for over a month, Sarah Perez explained the 3 Facebook settings she thought every user should know after Facebook made sweeping changes to its privacy policy that dramatically increased the amount of profile information shared publicly. Then, after the most recent changes (in April 2010), Gina Trapani at Fast Company provided easy-to-use instructions complete with screenshots.

Perhaps if Facebook decides to take a human factors approach to privacy in the future, Yohann will re-friend Facebook.

Kelly Caine, PhD, is a research fellow in the School of Informatics and Computing at Indiana University. Her primary research interests include privacy, health technology, human factors, HCI, aging, and designing for special populations.

(post image from Flickr user hyku)

Human Factors Blog @ SXSW

Anne was invited to be a panelist at SXSW on Friday, March 12 at 5:00 PM. SXSW is a yearly music, movie, and interactive media festival held in Austin, TX. The title of the interactive panel is With Great Power Comes Great Responsibility: The Future of Video Games. Here is a description:

Video games are more popular than ever, and new games are delivering all kinds of social benefits, from video-game therapy for treating PTSD, to sims for training surgeons, to alternate-reality games that actually bring people together in real life. Will video games be a positive force for people and society in the future (as they arguably are today)? This panel is co-sponsored by Discover Magazine and the National Science Foundation.

Take a look at the event page for more information on the other panelists.  If you happen to be there, drop by and say hello.

She promises to document as many HF-relevant aspects of the conference as possible. Here are just some of the talks she’s planning to attend:

  • History of the button
  • Long distance UX
  • Is the brain the ultimate computer interface?
  • mind control: psychology for the web
  • what guys are doing to get more girls in tech
  • social gaming: lessons from the pioneers
  • products vs users: who’s winning
  • games for good