Here is a neat vision of what 2019 will be like, courtesy of Microsoft Office Labs. This concept video was produced by Microsoft and shown at the Wharton Business Technology Conference. Two things that caught my attention were the prodigious use of touch interfaces and gestures (which I am not crazy about; my fingers and hands get tired making exaggerated moves on my iPod touch), and the importance of information visualization.
Data is displayed and interacted with in creative ways in the following examples. The video appears after the images below:
We’ve spoken before about the role of human factors in energy conservation. It looks like Google is taking a big step toward raising awareness of home energy usage from your desktop. With the installation of home energy meters, you may soon be able to track your own power usage:
Google PowerMeter, now in prototype, will receive information from utility smart meters and energy management devices and provide anyone who signs up access to her home electricity consumption right on her iGoogle homepage.
They have taken a similar approach to health maintenance with Google Health, which incorporates actual health data from sensors, doctors, and pharmacies and shows you the data.
With all of this sensor aggregation come issues of automation (again), questions about the best ways to present and visualize data to ease interpretation, and issues of technology acceptance (e.g., privacy versus utility). It looks like we human factors people will be busy for a long time. Thanks, Google!
Recently, an Advanced Driver Assistance System (ADAS) hit the news in Europe. I’ve always been interested in advanced navigation systems (and their problems), so I check in on some of the research programs occasionally. After all, individual differences from culture to aging all affect how we use navigation systems.
The original article I mentioned briefly addresses the errors these systems may cause:
Drivers’ uncritical reliance on their sat navs has led to a growing number of mishaps. Last year a woman wrecked her £96,000 Mercedes SL500 trying to drive across a swollen ford through the River Sence in Sheepy Magna, Leicestershire, after her sat nav told her it was a passable route.
…but spent most of the time discussing the errors they catch.
In addition to instructions on when to slow down or change gear for the best fuel economy, motorists will also be warned when they are driving erratically and will even be told at the end of the journey if they have caused undue stress to parts of the car.
Of course, getting to the end of the journey may be more difficult using the current navigation systems. This finding comes from Ziefle, Pappachan, Jakobs and Wallentowitz (2008) who gave an ADAS to older drivers to compensate for age-related perceptual declines. They compared younger and older drivers using either audio or visual aids:
When no assistance was present, driving performance was superior than in both assistance conditions. The visual interface had a lower detrimental effect than the auditory ADAS which had the strongest distracting effect. In contrast to performance outcomes, the auditory interface was rated as more helpful by older drivers compared to the visual interface.
I’ve previously posted on the topic of tagging. As more products attempt to automate the process of creating tags from content, more problems like the one below are bound to appear. A pretty clear case of automation gone wrong:
It wasn’t what anyone expected to see while perusing a news article. But there, in the final paragraph of an online story about the call girl involved in the Eliot Spitzer scandal, Yahoo’s automated system was inviting readers to browse through photos of underage girls.
Yahoo Shortcuts, which more frequently offers to help readers search for news and Web sites on topics like “California” or “President Bush,” had in this case highlighted the words “underage girls.” Readers who passed their mouse over the phrase in The Associated Press story were shown a pop-up window with an image from Flickr, Yahoo’s photo-sharing Web site.
“A tag is a (relevant) keyword or term associated with or assigned to a piece of information (a picture, a geographic map, a blog entry, a video clip etc.), thus describing the item and enabling keyword-based classification and search of information.” [Wikipedia]
Tagging is the process of assigning keywords or phrases to items. To be more concrete, many of us may have collections of bookmarks in our web browser. Tagging each bookmark with a relevant term allows them to be classified and categorized semantically, or by meaning.
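To make the idea concrete, a tag system is essentially an inverted mapping from keywords to items. Here is a minimal sketch (the bookmark URLs and tags are invented for illustration, not from any particular product):

```python
from collections import defaultdict

# Minimal tag index: each tag maps to the set of bookmarks labeled with it.
# (Hypothetical data for illustration only.)
tag_index = defaultdict(set)

def tag_bookmark(url, tags):
    """Associate one or more tags with a bookmark URL."""
    for tag in tags:
        tag_index[tag.lower()].add(url)

def find_by_tag(tag):
    """Return all bookmarks carrying the given tag."""
    return sorted(tag_index.get(tag.lower(), set()))

tag_bookmark("http://example.com/hf-article", ["human factors", "design"])
tag_bookmark("http://example.com/gps-story", ["design", "navigation"])

print(find_by_tag("design"))
# -> ['http://example.com/gps-story', 'http://example.com/hf-article']
```

The semantic classification comes for free once the tags exist: any item can carry several tags, so the same bookmark shows up under every category that describes it.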
But one major downside of tagging is that the user has to actually do the tagging; quite a bit of work if you have many thousands of bookmarks (or emails or photographs). Rashmi Sinha was one of the first people to start thinking about the cognitive requirements (i.e., what happens in the head) when users tag. Here is a figure from her analysis:
Her bottom line is that tagging is efficient (compared to other methods of organization) because, when we are trying to think of keywords, many choices come to mind (stage 1). However, I see this as a potential downside: it is a heavy decision step that must be repeated for every item one needs to tag.
Several new products are on the horizon that aim to automate this very step:
The first (which is in private beta and thus unavailable) is Twine. The New York Times wrote an interesting story about Twine and how it automatically scans your documents to obtain relevant keywords.
Sarah Miller, a librarian at Illinois Wesleyan University in Bloomington, became a member of Twine’s test group in November, partly because she and her husband, Ethan, a doctoral candidate, needed a place to organize all the documents they wanted to share with each other about teaching and learning.
Ms. Miller likes Twine’s mechanized tagging abilities.
“If I save the URL of a Web page into my Twine account,” she said, “Twine will skim the page and turn it into tags automatically. It’s a way to tie together things that my husband and I find over days, and months and years.”
Twine has an option that allows people to do their own descriptive tagging, just as they might, for instance, use the Web service del.icio.us to assign labels to Web sites to help keep track of them.
“But my tagging is inefficient,” Ms. Miller said. “Personal vocabulary changes. It’s difficult to be consistent.”
A less automated solution is a new product (also in private beta) called Zigtag. Zigtag relies on you entering a keyword, but afterwards it will suggest additional keywords.
After entering an appropriate tag for a page, the user is presented with a list of matching keywords, each of which has been defined in Zigtag’s database. For example, after entering “Apple” into the search field, I was able to choose from “the computer company”, “the pomaceous fruit”, and “the record company”, among others. The process is painless and the integrated dictionary is fairly comprehensive. If you happen to stumble across a term that isn’t defined, you can easily request to have it added to the dictionary (and can place your own temporary tag). [Tech Crunch]
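The core of that interaction is a lookup from a typed keyword to a set of defined senses. A toy sketch of the idea, with a hand-written sense dictionary (Zigtag’s real database is of course far larger and not public):

```python
# Toy semantic-tag dictionary: a typed keyword maps to candidate "senses."
# These entries are hand-written examples, not Zigtag's actual data.
SENSES = {
    "apple": [
        "Apple (the computer company)",
        "apple (the pomaceous fruit)",
        "Apple (the record company)",
    ],
    "jaguar": [
        "Jaguar (the car maker)",
        "jaguar (the big cat)",
    ],
}

def suggest_senses(keyword):
    """Return the defined senses for a keyword, or an empty list if unknown."""
    return SENSES.get(keyword.strip().lower(), [])

print(suggest_senses("Apple"))
```

The payoff of tagging with senses rather than raw strings is consistency: “Apple (the computer company)” stays distinct from the fruit no matter how your personal vocabulary drifts.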
While these are nice solutions, I’ve always imagined that one side benefit of tagging was that the very effortful process of tagging could contribute to a more durable memory trace (the classic “generation effect”). Incidentally, some limited research of mine (PDF) has not borne this out. But in reality, how well do we really want to remember our bookmarks? Most of us are satisfied that a bookmark is stored somewhere and are less interested in retrieving it later unaided.
I’ve heard a great deal about trust and automation over the years, but this has to be my favorite new example of over-reliance on a system.
GPS routed bus under bridge, company says
“The driver of the bus carrying the Garfield High School girls softball team that hit a brick and concrete footbridge was using a GPS navigation system that routed the tall bus under the 9-foot bridge, the charter company’s president said Thursday.

Steve Abegg, president of Journey Lines in Lynnwood, said the off-the-shelf navigation unit had settings for car, motorcycle, bus or truck. Although the unit was set for a bus, it chose a route through the Washington Park Arboretum that did not provide enough clearance for the nearly 12-foot-high vehicle, Abegg said. The driver told police he did not see the flashing lights or yellow sign posting the bridge height.
“We haven’t really had serious problems with anything, but here it’s presented a problem that we didn’t consider,” Abegg said of the GPS unit. “We just thought it would be a safe route because, why else would they have a selection for a bus?””
Indeed, why WOULD “they” have a selection for a bus? Here is an excerpt from the manual (Disclosure: I am assuming it’s the same model):
“Calculate Routes for – Lets you take full advantage of the routing information built in the City Navigator maps. Some roads have vehicle-based restrictions. For example, a street or gate may be accessible by emergency vehicles only, or a residential street may not allow commercial trucking traffic. By specifying which vehicle type you are driving, you can avoid being routed through an area that is prohibited for your type of vehicle. Likewise, the ******** III may give you access to roads or turns that wouldn’t be available to normal traffic. The following options are available:
Truck (large semi-tractor/trailer)
Emergency (ambulance, fire department, police, etc.)
Delivery (delivery vehicles)
Bicycle (avoids routing through interstates and major highways)
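In principle, a vehicle-type setting only helps if the map data carries the restrictions to go with it. A sketch of how such filtering might work, with invented road segments and an invented clearance field (real City Navigator data is far richer, and apparently did not include the footbridge height here):

```python
# Hypothetical road-segment records with per-vehicle-type restrictions and a
# physical clearance. A router should drop any segment the current vehicle
# cannot legally or physically use. All data below is made up for illustration.
SEGMENTS = [
    {"name": "Arboretum Dr", "allowed": {"car", "bus"}, "clearance_ft": 9.0},
    {"name": "Highway 520", "allowed": {"car", "bus", "truck"}, "clearance_ft": 16.5},
    {"name": "Fire Lane 3", "allowed": {"emergency"}, "clearance_ft": 14.0},
]

def usable_segments(vehicle_type, vehicle_height_ft):
    """Keep only segments this vehicle may enter AND physically fits under."""
    return [
        s["name"]
        for s in SEGMENTS
        if vehicle_type in s["allowed"] and vehicle_height_ft < s["clearance_ft"]
    ]

# A nearly 12-foot bus: the arboretum road allows buses on paper,
# but its 9-foot bridge should rule it out.
print(usable_segments("bus", 12.0))  # -> ['Highway 520']
```

The Seattle accident suggests the unit’s “bus” profile knew about legal restrictions but not physical clearances, so the first condition passed while the second was never checked.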
If we can assume no automation can be 100% reliable, at what point do people put too much trust in the system? At what point do they ignore the system in favor of more difficult methods, such as a paper map? At what point is a system so misleading that it should not be offered at all? Sanchez (2006) addressed this question and related the type and timing of errors to the amount of trust placed in the automation. Trust declined sharply (for a time) after an error, so we may assume the Seattle driver might have re-checked the route manually had other (less catastrophic) errors occurred in the past.*
The spokesman for the GPS company is quoted in the above article as stating:
“Stoplights aren’t in our databases, either, but you’re still expected to stop for stoplights.”
I didn’t read the whole manual, but I’m pretty sure it doesn’t say the GPS will warn you of stoplights, which would be a closer analogy to the actual feature that contributed to the accident. This is a time when an apology and a promise of redesign might serve the company better than blaming its users.
*Not a good strategy for preventing accidents!
Other sources for information on trust and reliability of automated systems:
There is an episode of the television show Seinfeld (“The Dealership“) where Kramer is test driving a car. During the test drive, Kramer notices the fuel gauge is empty and he wants to know how far he can drive before he really runs out of gas.
While I haven’t gone that far, I like to see how fuel-efficiently I can drive. My car has a dynamic display of instantaneous fuel economy in miles per gallon (my record is 37.5 MPG in a non-hybrid sedan).
Why do I do this? I don’t know; perhaps an innate competitiveness. But I know others who do this as well. Why not capitalize on this change in behavior by including more energy-consumption displays in more products, and even in the home? The image on the left is a new home energy monitor that tracks electricity, gas, and water usage.
Satellite navigation devices have been blamed for causing millions of pounds worth of damage to railway crossings and bridges. Network Rail claims 2,000 bridges are hit every year by lorries that have been directed along inappropriate roads for their size.
I guess it would be cost-prohibitive to put this bridge information into the GPS databases…