The Kindle e-book reader has great promise, especially for students. Who wouldn’t want to trade a bunch of heavy books for a slim electronic device? Amazon partnered with Princeton to see how students would interact with the device. The results are not good. The students’ comments echo my own experience: the Kindle is great for pleasure reading, but not so good for academic use. One student in the trial commented eloquently:
“Much of my learning comes from a physical interaction with the text: bookmarks, highlights, page-tearing, sticky notes and other marks representing the importance of certain passages — not to mention margin notes, where most of my paper ideas come from and interaction with the material occurs,” he explained. “All these things have been lost, and if not lost they’re too slow to keep up with my thinking, and the ‘features’ have been rendered useless.”
One of the loudest complaints seems to center on the inability to take notes easily (which was one of my major gripes). The human factors lesson? Task analysis: learn how the user does it now, then replicate or improve it, but don’t interfere with it. I can’t beat up on the Kindle too much (I really *want* to like it), because presumably it was not designed for academic use.
I’m glad that open-source software is taking usability seriously. A poor user experience may be one of the biggest hurdles to wider open-source adoption (e.g., compare GIMP to Photoshop). Shuttleworth (of Ubuntu) has a great plan: the STFU protocol.
During his keynote, he extended an invitation to any open source application to submit their software for testing by user-experience experts. The sessions would be recorded for posterity, and the developer would not be able to interact with the user.
“If the developer is in the room, they have to say nothing. It’s the shut the f— up protocol,” Shuttleworth said. “You sit and watch someone struggle with the software that you’ve so lovingly produced.”
Electric cars are utterly silent, which makes them hazardous when they sneak up on you at low speeds. Nissan is considering having its Leaf electric car emit a whine reminiscent of the flying cars in Blade Runner. It’s one of my favorite movies, so I approve!
“We decided that if we’re going to do this, if we have to make sound, then we’re going to make it beautiful and futuristic,” Tabata said.
The company consulted Japanese composers of film scores. What Tabata and his six-member team came up with is a high-pitched sound reminiscent of the flying cars in “Blade Runner,” the 1982 film directed by Ridley Scott portraying his dystopian vision of 2019.
Time.com is reporting that part of the economic recession may have been caused by Warren Buffett not being able to check his voice mail:
as Buffett was rushing out to a social engagement in Edmonton, Alberta, he got a call from Bob Diamond, the head of Barclays Capital…[ed. Diamond was creating a plan to save an investment bank and needed money from Buffett]…
Fast forward 10 months. Buffett, who admits he never has really learned the basics of his cell phone, asked his daughter Susan about a little indicator he had noticed on the screen: “Can you figure out what’s on there?” It turned out to be the message from Diamond that he had been waiting for that night.
The NYTimes has an interesting OpEd in which various designers were asked to re-imagine the homeland security advisory system. It’s a multimedia presentation with narration from the graphic designers. There’s not much warnings research behind it, but it’s interesting. Here is what it looks like now:
and here is one proposed redesign that, according to the designer, takes advantage of our ability to read emotions from eyes:
Web sites he’s visited (221,173), photos taken (56,282), emails sent and received (156,041), docs written and read (18,883), phone conversations had (2,000), photos snapped by the SenseCam hanging around his neck (66,000), songs listened to (7,139), and videos taken by him (2,164).
Why is he doing this? He sees some appeal in the ability to always remember:
By using e-memory as a surrogate for meat-based memory, he argues, we free our minds to engage in more creativity, learning, and innovation (sort of like Getting Things Done without all those darn Post-its).
In a work context, this rings true: a large part of my time is spent looking for files or trying to remember where I put them.
This raises a whole slew of interesting human factors and usability questions that are elephants in the room:
Currently, a portion of the recording is done manually. How and what should be automated?
How does one efficiently search/browse through potentially petabytes of lifedata? I don’t think a search engine would suffice (not all material would be textual).
This seems to solve the “encoding” problem in memory. But it wreaks havoc with the “retrieval” portion. You still need a good retrieval cue.
What are the implications of off-loading so much memory? How will it change the way we currently learn/work?
As a type of automation, what will happen when it fails or is unreliable?
What are the privacy implications of recording this much data (especially with the SenseCam)?
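The retrieval question above can be made concrete with a toy sketch. Since much of the lifelog (photos, audio) isn’t textual, keyword search won’t work; retrieval has to match on contextual cues like place, time, and people. Everything below — the record fields and the `recall` helper — is invented for illustration, not anything from Bell’s actual system:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LifelogRecord:
    # Hypothetical lifelog entry. Non-textual items (photos, songs)
    # carry no searchable text, only contextual metadata.
    kind: str                 # e.g. "photo", "email", "song"
    when: date
    place: str
    people: frozenset = field(default_factory=frozenset)

def recall(records, *, place=None, people=()):
    """Cue-based retrieval: filter on context, not content."""
    hits = []
    for r in records:
        if place is not None and r.place != place:
            continue
        if not set(people) <= set(r.people):
            continue
        hits.append(r)
    return hits

log = [
    LifelogRecord("photo", date(2009, 5, 1), "Edmonton", frozenset({"Bob"})),
    LifelogRecord("email", date(2009, 5, 2), "Omaha"),
    LifelogRecord("photo", date(2009, 6, 3), "Edmonton"),
]

# A vague cue returns a pile of candidates; a richer cue
# ("that Edmonton photo with Bob") narrows it to one.
print(len(recall(log, place="Edmonton")))                  # 2
print(len(recall(log, place="Edmonton", people=["Bob"])))  # 1
```

The point of the sketch is the hedge in the blog post itself: encoding is cheap, but without a good retrieval cue the right record is buried among petabytes of near-identical ones.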
His book outlining this idea comes out September 17th (Amazon link).