I've previously posted on the topic of tagging. As more products attempt to automate the creation of tags from content, more problems like the one below are bound to appear. A pretty clear case of automation gone wrong:
It wasn’t what anyone expected to see while perusing a news article. But there, in the final paragraph of an online story about the call girl involved in the Eliot Spitzer scandal, Yahoo’s automated system was inviting readers to browse through photos of underage girls.
Yahoo Shortcuts, which more frequently offers to help readers search for news and Web sites on topics like “California” or “President Bush,” had in this case highlighted the words “underage girls.” Readers who passed their mouse over the phrase in The Associated Press story were shown a pop-up window with an image from Flickr, Yahoo’s photo-sharing Web site.
I’ve heard a great deal about trust and automation over the years, but this has to be my favorite new example of over-reliance on a system.
GPS routed bus under bridge, company says
“The driver of the bus carrying the Garfield High School girls softball team that hit a brick and concrete footbridge was using a GPS navigation system that routed the tall bus under the 9-foot bridge, the charter company’s president said Thursday.

Steve Abegg, president of Journey Lines in Lynnwood, said the off-the-shelf navigation unit had settings for car, motorcycle, bus or truck. Although the unit was set for a bus, it chose a route through the Washington Park Arboretum that did not provide enough clearance for the nearly 12-foot-high vehicle, Abegg said. The driver told police he did not see the flashing lights or yellow sign posting the bridge height.
“We haven’t really had serious problems with anything, but here it’s presented a problem that we didn’t consider,” Abegg said of the GPS unit. “We just thought it would be a safe route because, why else would they have a selection for a bus?””
Indeed, why WOULD “they” have a selection for a bus? Here is an excerpt from the manual (Disclosure: I am assuming it’s the same model):
“Calculate Routes for – Lets you take full advantage of the routing information built in the City Navigator maps. Some roads have vehicle-based restrictions. For example, a street or gate may be accessible by emergency vehicles only, or a residential street may not allow commercial trucking traffic. By specifying which vehicle type you are driving, you can avoid being routed through an area that is prohibited for your type of vehicle. Likewise, the ******** III may give you access to roads or turns that wouldn’t be available to normal traffic. The following options are available:
Truck (large semi-tractor/trailer)
Emergency (ambulance, fire department, police, etc.)
Delivery (delivery vehicles)
Bicycle (avoids routing through interstates and major highways)”
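As a rough sketch of what that manual passage implies, a router can treat each road segment as carrying access restrictions and a posted clearance, and exclude segments the selected vehicle profile cannot use. Everything below (names, attributes, values) is hypothetical and not the actual data model of any GPS vendor:

```python
from dataclasses import dataclass

# Hypothetical sketch: how a "vehicle type" setting might filter
# road segments during route calculation.

@dataclass
class RoadSegment:
    name: str
    clearance_ft: float = float("inf")  # posted height limit; inf = none posted
    allowed: frozenset = frozenset({"car", "truck", "bus", "bicycle"})

def passable(segment: RoadSegment, vehicle: str, height_ft: float) -> bool:
    """A segment is usable only if the vehicle type is permitted AND
    the vehicle physically fits under any posted clearance."""
    if vehicle not in segment.allowed:
        return False
    return height_ft < segment.clearance_ft

# The footbridge from the story: 9-foot clearance, nearly 12-foot bus.
footbridge = RoadSegment("Arboretum footbridge", clearance_ft=9.0)
print(passable(footbridge, "bus", 11.8))  # False: excluded from the route
print(passable(footbridge, "car", 4.8))   # True
```

Of course, a check like this only helps if clearance data is actually in the map database; the story suggests the "bus" setting governed access restrictions (emergency-only streets, no-truck residential streets) rather than bridge heights.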
If we can assume no automation is 100% reliable, at what point do people put too much trust in the system? At what point do they abandon it in favor of more difficult methods, such as a paper map? At what point is a system so misleading that it should not be offered at all? Sanchez (2006) addressed this question, relating the type and timing of errors to the amount of trust placed in the automation. Trust declined sharply (for a time) after an error, so we may assume the Seattle driver might have re-checked the route manually had other (less catastrophic) errors occurred in the past.*
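That dynamic (a sharp drop in trust after an error, followed by slow recovery with error-free use) can be caricatured with a toy update rule. To be clear, the function and parameter values below are invented for illustration and are not from Sanchez's paper:

```python
def update_trust(trust: float, error: bool,
                 drop: float = 0.5, recovery: float = 0.05) -> float:
    """Toy model of trust in automation: an error cuts trust sharply;
    each error-free interaction restores a little, capped at 1.0.
    The drop and recovery parameters are invented for this sketch."""
    if error:
        return trust * (1.0 - drop)
    return min(1.0, trust + recovery)

trust = 0.9
trust = update_trust(trust, error=True)       # sharp drop after an error
for _ in range(5):
    trust = update_trust(trust, error=False)  # gradual recovery afterward
```

The interesting design implication is the asymmetry: a single error undoes many error-free interactions, which is consistent with the idea that the Seattle driver, never having seen the unit fail, had no reason to double-check it.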
The spokesman for the GPS company is quoted in the above article as stating:
“Stoplights aren’t in our databases, either, but you’re still expected to stop for stoplights.”
I didn’t read the whole manual, but I’m pretty sure it doesn’t claim the GPS will warn you about stoplights; warning about stoplights would be a closer analogy to the routing feature that actually contributed to the accident. This is a case where an apology and a promise of redesign might serve the company better than blaming its users.
*Not a good strategy for preventing accidents!
Other sources for information on trust and reliability of automated systems:
There is an episode of the television show Seinfeld (“The Dealership”) where Kramer is test driving a car. During the test drive, Kramer notices the fuel gauge is empty and he wants to know how far he can drive before he really runs out of gas.
While I haven’t gone that far, I do like to see how fuel-efficiently I can drive. My car has a dynamic display of instantaneous fuel economy in miles per gallon (my record is 37.5 MPG in a non-hybrid sedan).
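For the curious, an instantaneous MPG readout is just unit cancellation: speed divided by fuel flow rate. A minimal sketch, assuming the car exposes speed in miles per hour and fuel flow in gallons per hour (the function name and inputs are my own, not any manufacturer's API):

```python
def instant_mpg(speed_mph: float, fuel_flow_gph: float) -> float:
    """Instantaneous fuel economy: (miles/hour) / (gallons/hour)
    leaves miles per gallon. Inputs would come from the vehicle's
    speed and fuel-flow sensors."""
    if fuel_flow_gph <= 0:
        return float("inf")  # coasting with fuel cut: economy is unbounded
    return speed_mph / fuel_flow_gph

print(instant_mpg(60.0, 1.6))  # 37.5, the record mentioned above
```

The unbounded coasting case is why these displays typically peg at some maximum while decelerating downhill.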
Why do I do this? I don’t know; perhaps an innate competitiveness. But I know others who do this as well. Why not capitalize on this behavior by including energy-consumption displays in more products, and even in the home? The image on the left is a new home energy monitor that tracks electricity, gas, and water usage.
Satellite navigation devices have been blamed for causing millions of pounds’ worth of damage to railway crossings and bridges. Network Rail claims 2,000 bridges are hit every year by lorries that have been directed along roads inappropriate for their size.
I guess it would be cost-prohibitive to put this bridge information into the GPS databases…
Interesting article on the use of automated decision aids in consumer devices. The researchers used a vacuum cleaner that indicated whether an area needed more cleaning. They wanted to determine whether users would use less energy if they were told that an area was clean; they found that energy consumption was not reduced. This contrasts with some research (I can’t recall the exact citation) showing that when users saw their household energy consumption, they tended to be more mindful about reducing their usage.
Abstract. This article presents two empirical studies (n = 30, n = 48) that are concerned with different forms of automation in interactive consumer products. The goal of the studies was to evaluate the effectiveness of two types of automation: perceptual augmentation (i.e. supporting users’ information acquisition and analysis); and control integration (i.e. supporting users’ action selection and implementation). Furthermore, the effectiveness of on-product information (i.e. labels attached to product) in supporting automation design was evaluated. The findings suggested greater benefits for automation in control integration than in perceptual augmentation alone, which may be partly due to the specific requirements of consumer product usage. If employed appropriately, on-product information can be a helpful means of information conveyance. The article discusses the implications of automation design in interactive consumer products while drawing on automation models from the work environment.
I wonder if he was distracted (being in a rental car) and unable to pay attention to his surroundings while using the GPS. Not much detail in the story.
BEDFORD HILLS, N.Y. – A Global Positioning System can tell a driver a lot of things — but apparently not when a train is coming. A computer consultant driving a rental car drove onto train tracks Wednesday using the instructions his GPS unit gave him. A train was barreling toward him, but he escaped in time and no one was injured.
The way it works is that the saw blade registers electrical contact with human skin and immediately stops. I can’t imagine not having this safety system in place, now that it is available. However, I still have some questions that commenters might want to weigh in on:
1. Unless the system is more redundant than an airplane’s, it must be able to fail. How do you keep users vigilant when 99.999% of the time there is no penalty for carelessness?
2. To answer my own question, is the fear of a spinning blade strong enough to do that on its own? I know I’m not going to intentionally test the SawStop.
3. Can we use natural fears such as this in other areas of automation?
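For illustration only, the contact-then-brake behavior described above can be caricatured as a threshold check on a sensed signal. SawStop's real system is capacitive sensing with a pyrotechnic brake and is far more sophisticated; every name and number below is invented:

```python
# Toy sketch of detect-and-brake logic. The blade carries a small
# electrical signal; skin contact changes it sharply (skin conducts,
# dry wood does not), and the brake fires once, irreversibly.

CONTACT_THRESHOLD = 0.7  # invented normalized signal level

def check_and_brake(signal_level: float, brake_fired: bool) -> bool:
    """Return the new brake state: fire once when the sensed signal
    crosses the threshold; once fired, stay fired."""
    if not brake_fired and signal_level >= CONTACT_THRESHOLD:
        return True  # brake fires; blade stops and retracts
    return brake_fired

fired = False
for reading in [0.10, 0.12, 0.11, 0.85]:  # last reading: skin contact
    fired = check_and_brake(reading, fired)
print(fired)  # True
```

Note the one-way latch: like an airbag, the mechanism is sacrificial, which is part of why the 99.999%-reliable vigilance question above is so interesting.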
“When do we as adult woodworkers take responsibility and understand the dangers of woodworking. Most accidents happen due to not paying attention to what we’re doing. If we stay focused while we’re using power tools, or even hand tools, we eliminate accidents.”