Usability issues in navigating your life

Gordon Bell, a Microsoft researcher, is recording his life in excruciating detail in a project dubbed MyLifeBits:

Web sites he’s visited (221,173), photos taken (56,282), emails sent and received (156,041), docs written and read (18,883), phone conversations had (2,000), photos snapped by the SenseCam hanging around his neck (66,000), songs listened to (7,139), and videos taken by him (2,164).

Why is he doing this?  He sees some appeal in the ability to always remember:

By using e-memory as a surrogate for meat-based memory, he argues, we free our minds to engage in more creativity, learning, and innovation (sort of like Getting Things Done without all those darn Post-its).

In a work context, this rings true.  A large part of my time is spent looking for files or trying to remember where I put things.

A whole slew of interesting human factors and usability questions are elephants in the room:

  • Currently, a portion of the recording is done manually.  How and what should be automated?
  • How does one efficiently search/browse through potentially petabytes of lifedata?  I don’t think a search engine would suffice (not all material would be textual).
  • This seems to solve the “encoding” problem in memory, but it wreaks havoc with the “retrieval” portion: you still need a good retrieval cue (see the sketch after this list).
  • What are the implications of off-loading so much memory?  How will it change the way we currently learn/work?
  • As a type of automation, what will happen when it fails or is unreliable?
  • What are the privacy implications of recording this much data (especially the SenseCam)?
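
To make the retrieval-cue question concrete, here is a minimal sketch of cue-based lookup over a heterogeneous archive.  This is my own illustration, not anything from MyLifeBits: items carry contextual metadata (timestamp, media type, location, tags) so that a non-textual item can still be found from a partial memory like “a photo, sometime in July.”

    # A minimal sketch (not MyLifeBits code): retrieving non-textual items
    # by contextual cues (time, place, media type) rather than full-text search.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class LifeItem:
        media_type: str        # "photo", "email", "song", "video", ...
        timestamp: datetime    # when the item was captured
        location: str = ""     # where it was captured, if known
        tags: set = field(default_factory=set)  # manual or automatic labels

    def retrieve(items, media_type=None, after=None, before=None, tag=None):
        """Filter items by whatever retrieval cues the user can recall."""
        for item in items:
            if media_type and item.media_type != media_type:
                continue
            if after and item.timestamp < after:
                continue
            if before and item.timestamp > before:
                continue
            if tag and tag not in item.tags:
                continue
            yield item

    archive = [
        LifeItem("photo", datetime(2009, 7, 4), "Seattle", {"fireworks"}),
        LifeItem("email", datetime(2009, 7, 5), tags={"work"}),
    ]
    # "I remember it was a photo, sometime in July": a partial cue is enough.
    for hit in retrieve(archive, media_type="photo", after=datetime(2009, 7, 1)):
        print(hit.media_type, hit.timestamp.date(), hit.location, hit.tags)

Even this toy version hints at the scale problem: with petabytes of items, filtering alone won’t do, and some ranking over the cues would be needed.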

His book outlining this idea comes out September 17th (Amazon link).

4 thoughts on “Usability issues in navigating your life”

  1. How does one efficiently search/browse through potentially petabytes of lifedata? I don’t think a search engine would suffice (not all material would be textual).

    – Text is easy, as you mentioned.
    – Speech can also be handled relatively well once it’s transcribed to text.
    – While I haven’t tried it, I’ve heard that the software Apple has for finding similar images works pretty well, so that might cover visual images (a rough sketch of one such approach follows this list).
    – I have seen some tools to identify songs from a few notes, so that might apply to non-verbal audio.
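
    As an aside, here is a rough sketch of one way “find similar images” can work: a perceptual “difference hash” (dHash), where visually similar photos produce hashes with a small Hamming distance.  This is only an illustration of the general technique, not the algorithm Apple or anyone else actually ships.

        # Sketch of a perceptual difference hash (dHash) for image similarity.
        # Requires the Pillow library (PIL).
        from PIL import Image

        def dhash(path, size=8):
            """Shrink the image to a tiny grayscale grid and record whether
            each pixel is brighter than its right-hand neighbour."""
            img = Image.open(path).convert("L").resize((size + 1, size))
            pixels = list(img.getdata())
            bits = 0
            for row in range(size):
                for col in range(size):
                    left = pixels[row * (size + 1) + col]
                    right = pixels[row * (size + 1) + col + 1]
                    bits = (bits << 1) | (left > right)
            return bits

        def hamming(a, b):
            """Count differing bits; a small distance means 'looks alike'."""
            return bin(a ^ b).count("1")

        # Usage: if hamming(dhash("a.jpg"), dhash("b.jpg")) < 10, the two
        # photos are probably near-duplicates.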

    The ones that I’d really like to see would involve smell and taste. It seems like very little has been done to try to detect these stimuli in a meaningful manner. I’d love to have some help when I am faced with the “where do I know that smell from?” question.
