As the first post in a series, we interview one of the pioneers in the study of human-AI relationships, Dr. Julie Carpenter. She has over 15 years of experience in human-centered design and human-AI interaction research, teaching, and writing. Her principal research concerns how culture influences human perception of AI and robotic systems, and the associated human factors such as user trust and decision-making in human-robot cooperative interactions in natural use-case environments.
I chose a provocative title for this post after reading the report on what caused the wreck of the USS John S. McCain in August of 2017. In summary: the USS John S. McCain was in high-traffic waters when the crew believed they had lost steering control. Despite attempts to slow or maneuver, the ship was struck by another large vessel. The bodies of 10 sailors were eventually recovered, and five others were injured.
Today the Navy released its final report on the accident. After reading it, it seems to me the report blames the crew. Here are some quotes from the official Naval report:
- Loss of situational awareness in response to mistakes in the operation of the JOHN S MCCAIN’s steering and propulsion system, while in the presence of a high density of maritime traffic
- Failure to follow the International Nautical Rules of the Road, a system of rules to govern the maneuvering of vessels when risk of collision is present
- Watchstanders operating the JOHN S MCCAIN’s steering and propulsion systems had insufficient proficiency and knowledge of the systems
And a rather devastating passage:
In the Navy, the responsibility of the Commanding Officer for his or her ship is absolute. Many of the decisions made that led to this incident were the result of poor judgment and decision making of the Commanding Officer. That said, no single person bears full responsibility for this incident. The crew was unprepared for the situation in which they found themselves through a lack of preparation, ineffective command and control and deficiencies in training and preparations for navigation.
Ars Technica called my attention to an important cause of the accident that the report does not specifically call out: the poor feedback design of the control system. I think it is a problem that the report focused on “failures” of the people involved, not on the design of the machines and systems they used. After my reading, I would summarize the cause of the accident as: “The ship could be controlled from many locations, and this control was transferred using a computer interface. That interface did not give sufficient information about its current state or feedback about which station controlled which functions of the ship. This made the crew think they had lost steering control when that control had actually just been moved to another location.” I based this on information from the report, including:
Steering was never physically lost. Rather, it had been shifted to a different control station and watchstanders failed to recognize this configuration. Complicating this, the steering control transfer to the Lee Helm caused the rudder to go amidships (centerline). Since the Helmsman had been steering 1-4 degrees of right rudder to maintain course before the transfer, the amidships rudder deviated the ship’s course to the left.
Even this section calls out the “failure to recognize this configuration.” If the system is designed well, one shouldn’t have to expend any cognitive or physical resources to know where the ship is being controlled from.
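To make the design principle concrete, here is a hypothetical sketch of a control-transfer scheme that announces every transfer to every station. This is not the ship’s actual system; all class, station, and method names are invented for illustration:

```python
# Hypothetical sketch: a helm-control registry that makes the active
# station explicit and announces every transfer to all consoles.
# All names here are invented for illustration.

class HelmControl:
    def __init__(self, stations):
        self.stations = stations       # e.g. ["Helm", "Lee Helm", "Aft Steering"]
        self.active = stations[0]      # station currently steering
        self.rudder_angle = 0.0        # degrees; positive = right rudder

    def transfer(self, new_station):
        if new_station not in self.stations:
            raise ValueError(f"unknown station: {new_station}")
        old = self.active
        self.active = new_station
        # Carry the commanded rudder angle across the transfer rather than
        # resetting it to amidships -- per the report, the reset to
        # amidships is what deviated the ship's course to the left.
        self.announce(f"STEERING TRANSFERRED: {old} -> {new_station}; "
                      f"rudder held at {self.rudder_angle:+.1f} deg")

    def announce(self, message):
        # Push the same status line to every console, so no watchstander
        # has to deduce the configuration from indirect cues.
        for station in self.stations:
            print(f"[{station}] {message}")

helm = HelmControl(["Helm", "Lee Helm", "Aft Steering"])
helm.rudder_angle = 3.0                # holding course with slight right rudder
helm.transfer("Lee Helm")
```

The point of the sketch is only that the system state change is broadcast, not inferred: after a transfer, every station sees the same message about who has control and what the rudder is doing.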
Overall, I was surprised at the tone of this report regarding crew performance. Perhaps some of it is deserved, but without a hard look at the systems the crew uses, I don’t have much faith that we can avoid future accidents. The human factors field began with Fitts and Jones in 1947, when they insisted that the design of the cockpit created accident-prone situations. This went against the belief of the time, which was that “pilot error” was the main factor. Their work ushered in a new era, one in which we try to improve the systems people must use as well as their training and decision making. The picture below shows the interface of the USS John S. McCain, commissioned in 1994. I would be very interested to see how it appears in action.
The interface demonstration starts at the 1-minute mark if you would like to skip the advertisement.*
*I’m not sure if it counts as an advertisement when most people aren’t allowed to bank there. You must have a connection to the military to use USAA (hence the aircraft carrier example in the video).
I heard an interview over the weekend on the use of robots in war. A fascinating bit from that story: as modern warfare moves toward soldiers manipulating robots from afar, the military leveraged game companies’ existing research and development in the design of handheld controllers (at 13:43 in the streaming interview).
So the Predator drone’s remote control probably looks very similar to your PS3 or Xbox controller! That makes sense, as both tasks are nearly identical (as far as I can tell).
It was also interesting that the 19-year-old Army specialist who trains remote operators honed his skill by playing video games (at around 14:30 in the interview).
Relatedly, the Designing for Humans blog has a great multi-part series on ergonomics for interaction designers. Part 3 is very relevant to handheld controller design.
I went on a trip to Tucson over the holidays and toured the last Titan II missile silo. A brief history: from 1963 to 1982, these missiles were part of the Cold War strategy of “peace through deterrence” and “mutual assured destruction.” In essence, they provided one reason not to attack the US: even if we were destroyed, these missiles would still launch to destroy the Soviet Union.
Politics aside, the control room and interfaces for these missiles were fascinating from a human factors perspective. Gauges, buttons, and rotary inputs reside where we now would expect screens and keyboards. I reflected on this while there: though you need a button for each function, at least the interface never changes.
I snapped the picture below as an example of users improving a system. It appears they are trying to reduce their memory demands by listing the upper and lower boundaries of these controls on labels. It reminded me of the draft beer handles added to the levers in a nuclear power plant (as discussed by Don Norman in “The Design of Everyday Things”).
A little more history: the Titan sites do not have a perfect safety record. With 54 sites operating for about two decades, there were four recorded accidents, all of which cost lives, including one early fire in which 53 people died. Fortunately, none of these accidents resulted in a nuclear explosion, not even the one in which the warhead was blown out of the silo. This site provides a list and engineering analysis of the accidents; I would be interested in a human factors analysis.
In the accident that ejected the nuclear warhead, the commonly reported story says the explosion occurred when the missile was being serviced and a repairman dropped a heavy tool on the fuel tank. This implies the explosion was instant; however, it actually occurred over eight hours later, as the fuel exited the breach. The best description I could find comes from a newspaper, the Arkansas Leader:
The missile was housed in a silo within a silo that consisted of eight levels. Maintenance crews were working on level two when the accident happened. Attached to the hydraulic standing platforms was a rubberized boot that flipped over between the missile and the platform to prevent anything from falling through if dropped.
The day missile 374-7 exploded, the boot didn’t keep the socket from falling. At 6:30 p.m., maintenance crews entered the silo to begin work after being delayed due to various unrelated equipment malfunctions. The eight- and three-quarter-pound socket fell, hit the standing platform and bounced toward the missile.
The boot had become too pliable through the years, and the socket fell 70 feet down the silo, hit the thrust mount and bounced into the side of the stage one fuel tank. The 100,000-gallon fuel tank emptied into the bottom of the silo. The fuels interacted and generated heat, which in turn increased the pressure on the tanks. At 8 p.m., the wing made the decision to evacuate the control center.
“When we did that, we had no readings and no way of telling what was going on out there,” Gray said. “We lost all readings,” Gray added.
Many attempts were made to get into the control center to see the readings, according to Gray. At 3 a.m., two people, Livingston and Sgt. Jack Kennedy, made it into the complex. “When they made it in and had to back out because the fuel was so concentrated they couldn’t see, there was some controversy on who told them to turn on exhaust fan 105,” Gray said.
What that did, according to Gray, was pull the heavy concentration of fuel into the equipment area with all the electrical pumps.
“And automatically, boom!” Gray said. “The fire flashed back into the silo, which already had tremendous heat in there, and when the fire flashed back, the stage one oxidizer tank that was already very, very high in pressure, erupted.”
Within one hour of the accident, Gray found the nuclear warhead intact. “It was cracked, but it pegged out on the radioactivity scanner,” Gray said.
Lessons learned from this accident brought about safety improvements around nuclear weapons. Measures to prevent such accidents now include: all workers wearing belts with lanyards to attach tools, a cloth on the platform to reduce the chance of a dropped tool bouncing off, and a renovation of the platforms themselves.
One of our tour guides had actually been stationed at the silo. He was a great guide and a living piece of history. Consistent with what you might expect, he said the hardest times to keep the missile running and protected were the down times, hours of vigilance and inactivity.
Last, I also photographed some of the operation manuals at the museum. Apologies for the fuzziness of the pictures; I’ll re-type the best bits:
7. Key Run Up procedure, if required. (figure 3-26C)….Performed
Reference SACR 100-24, Volume VI to determine if key run up is required.
Step 8 can only be performed when SYNC indicator is lighted in NORM modes or TRACK/TRSHD indicators are lighted in SPCL modes.
8. DEMODULATOR CONTROL PRINT MODE thumbwheel switch ….. TEST
Observe printout on teleprinter. Printout is continuous series of characters RY’s or 64’s if transmitter site is transmitting idle message, or normal message traffic.
9. DEMODULATOR CONTROL PRINT MODE thumbwheel switch …Set as directed
*PVD must be continuously monitored visually or aurally.
*The PVD may be monitored by either a team in the silo or a crew member in the control center utilizing the wire type maintenance net.
For entry into launch duct level one, the PFC will be positioned outside of the opened level two launch duct access door, with sufficient probes to reach in the launch duct unless the PVD is required on level one of the launch duct for a sniff check.
Generally, I notice a large number of if/then/or/only types of commands, each of which the operator must hold in working memory while executing the procedure.
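The gating in step 8 above reads naturally as guard conditions in code, which is one way to see the memory load these commands impose. A hypothetical sketch follows: the indicator names come from the manual excerpt, but the logic and function name are my own invention for illustration:

```python
# Hypothetical sketch of the conditional gating in the checklist above.
# Indicator names (SYNC, TRACK, TRSHD) come from the manual excerpt;
# the function and its logic are invented for illustration.

def can_perform_step_8(mode, lighted_indicators):
    """Step 8 can only be performed when the SYNC indicator is lighted
    in NORM mode, or the TRACK/TRSHD indicators are lighted in SPCL mode
    (read here as requiring both)."""
    if mode == "NORM":
        return "SYNC" in lighted_indicators
    if mode == "SPCL":
        return ("TRACK" in lighted_indicators
                and "TRSHD" in lighted_indicators)
    return False

# A machine evaluates this guard instantly; a human operator must hold
# the mode, the indicator states, and the rule in working memory at once.
print(can_perform_step_8("NORM", {"SYNC"}))    # True
print(can_perform_step_8("SPCL", {"TRACK"}))   # False: TRSHD not lighted
```

Written out this way, each checklist step is really a small program the operator must execute by hand, under time pressure, with the silo’s indicator panel as the only state display.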
I have only one last thing to say: the fact that Tucson, AZ; Damascus, AR; and Wichita, KS are still around is a testament to the power of training and practice over our human frailties.
The U.S. military has been using games for decades to train its troops. Now, for the first time, the Army has set up a project office, just for building and deploying games.
No, the Army isn’t about to start handing out copies of Halo 3 to troops, TSJOnline.com notes. “I haven’t seen a game built for the entertainment industry that fills a training gap,” said Col. Jack Millar, director of the service’s Training and Doctrine Command’s (TRADOC) Project Office for Gaming, or TPO Gaming. Instead, the new office — part of the Army’s Kansas-based National Simulation Center — will focus on using videogame graphics to make those dull military simulations more realistic, and better-looking.
Enjoy this video of expert team performance. I note that the poster says these Marines “cut a lot of corners.” I’d be very interested to know how this differs from what they “should” be doing and what is optimal.
This from comments on the video: “Chief, what are you doing?! That was one jacked up fire mission. Are you trying to get your guys killed in a training mission? Not swabbing the breach or checking the bore while firing slow burning greenbag?! And what are you doing in the way between the trails? Didn’t do FCATS, did you? Don’t trust the quadrant on the gunner’s side?”
Though for some, turning war into a video game might remind them of 1984, unmanned aircraft offer unparalleled safety to the pilot.
Obvious issues include:
- Lag time from the camera halfway around the world
- Limited acuity and field of view
- Decision-making (e.g., bombing a target on a screen vs. dropping a bomb on people)
- High loss of equipment (if not pilot life)
In short, I worry that the news presented to the public paints too rosy a picture of these aircraft, implying that we will eventually have robots fighting robots from the comfort of our own homes.
I’d like to hear from people what they consider to be the most interesting human factors challenge of unmanned vehicles. I don’t know much about the design of their interfaces and whether they are more similar to a cockpit or a game console, but I’m interested to learn. Feel free to comment!