Category Archives: infovis

Neat SciAm Blog Post on InfoVis

This story was passed to me today by Matt Shipman, who writes about research on The Abstract.

An excerpt:

[Right – Comparison of two road signs, Highway Gothic on the left, Clearview on the right, 2007. Credit: Wikimedia Commons – click on link to see large]

“The previous road sign font, Highway Gothic, was hard to read because of very small counter spaces, or the enclosed shapes of a letterform (the inside of an “O” or “P”). Clearview, with larger enclosed shapes, taller lowercase letters and better letterspacing, is easier to read from a distance and at night.

[Left – Clearview letterforms. Credit: Wikimedia Commons – click on link to see large]

Clearview improved drivers’ reading accuracy, reaction time, and recognition distance – all with a few small tweaks to the design. In this case, proper type is crucial for public health and safety.”

Really, it is worth reading the whole article. Enjoy!

Intuitive Interfaces for Software Developers

Here is a link to some neat new research being done by my colleagues at NCSU.  It’s about the development of a tool that instantly changes the look of software code as it’s being developed, allowing for different ways to investigate bugs and features without changing the code in any way that might introduce errors. Dr. Emerson Murphy-Hill developed the interface for this code “refactoring” and published the work this past semester.

From the article:

Making Refactoring Tools More Attractive For Programmers

“The researchers designed the marking menus so that the refactoring tools are laid out in a way that makes sense to programmers. For example, tools that have opposite functions appear opposite each other in the marking menu. And tools that have similar functions in different contexts will appear in the same place on their respective marking menus.

Early testing shows that programmers were able to grasp the marking menu process quickly, and the layout of the tools within the menus was intuitive.”
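The layout idea in that quote is easy to picture in code. Below is a minimal sketch, with hypothetical refactoring names and gesture directions of my own choosing (not taken from Murphy-Hill's actual tool), of a marking menu in which each refactoring sits directly opposite its inverse:

```python
# Minimal sketch of a marking menu whose opposite slots hold opposite refactorings.
# The eight compass directions stand in for the gesture directions of a marking menu.

DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]  # 0, 45, ..., 315 degrees

# Hypothetical pairings: each refactoring and its inverse are placed 180 degrees apart.
menu = {
    "E":  "Extract Method",
    "W":  "Inline Method",      # inverse of Extract Method
    "N":  "Extract Variable",
    "S":  "Inline Variable",    # inverse of Extract Variable
    "NE": "Pull Up Member",
    "SW": "Push Down Member",   # inverse of Pull Up Member
    "NW": "Rename",
    "SE": "Undo Rename",        # placeholder inverse, just for symmetry
}

def opposite(direction: str) -> str:
    """Return the direction 180 degrees away (four slots over in an eight-slot menu)."""
    i = DIRECTIONS.index(direction)
    return DIRECTIONS[(i + 4) % len(DIRECTIONS)]

if __name__ == "__main__":
    for d in DIRECTIONS:
        print(f"{d:>2}: {menu[d]:<18} opposite -> {menu[opposite(d)]}")
```

The point of the spatial pairing is that once a programmer learns where one operation lives, its inverse comes for free.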

 

Miller Column Inception (or the geekiest movie you’ll see today)

Miller Columns are the browsing/visualization technique used in the Mac OS X Finder’s column view. It was inherited from the NeXT operating system (one of my favorites). I personally prefer this to the tree view that’s common in Windows Explorer.
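For anyone who hasn't used it, the essential idea is easy to sketch: each selection opens a new column to the right listing that item's children, so the whole path you have drilled into stays visible. Here is a minimal sketch with a made-up folder tree (nothing Finder- or NeXT-specific):

```python
# Rough sketch of Miller-column browsing: given a nested dict as the folder tree
# and a selected path, build one column per level, marking the selection in each.

tree = {
    "Documents": {"Papers": {"draft.tex": None, "refs.bib": None}, "Notes": {}},
    "Pictures": {"Hikes": {"trail.jpg": None}},
    "Music": {},
}

def miller_columns(tree, path):
    """Return a list of columns; each column lists (name, is_selected) at that level."""
    columns, node = [], tree
    for selected in path:
        columns.append([(name, name == selected) for name in node])
        node = node.get(selected) or {}
    columns.append([(name, False) for name in node])  # children of the final selection
    return columns

if __name__ == "__main__":
    for depth, column in enumerate(miller_columns(tree, ["Documents", "Papers"])):
        print(f"Column {depth}:")
        for name, selected in column:
            print(f"  {'>' if selected else ' '} {name}")
```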

The embedded video below summarizes the essential action of the movie Inception (spoiler alert!):

INCEPTION_FOLDER from chris baker on Vimeo.

(via Kottke)

Designer of movie UIs to design real UIs

We’ve discussed Mark Coleran before and his fantastical work on those fake user interfaces you see in movies (see the reel below). According to this Fast Company blog post, he will have a hand in designing real interfaces.

But Coleran doesn’t just throw out the rule books on user experience and “human interface guidelines.” In fact, because many of his clients know his movie work, he spends a lot of time talking them out of doing something like Children of Men or The Bourne Ultimatum. “One of my biggest frustrations is when people will say, ‘We have these specifications and requirements, now execute it just like we saw in the movie,'” he says. “What they don’t realize is that the requirements for those movie FUIs were completely unlike the ones that they’re dealing with. In a movie, you see an interface for at most a couple of seconds. In real life, every design decision has a consequence, and it doesn’t go away. It’s there day in and day out. Those human interface guidelines are there for very good reasons.”

Coleran Reel 2008.06 HD from Mark Coleran on Vimeo.

“Having the Data is not enough” – Visualization Techniques

I do love good visualization. I think animations like this, accompanied by a good story, would serve us well in everything from conference presentations to convincing industry clients.

It is from “The Joy of Stats” on the BBC (which I’m apparently not allowed to watch due to my location).

3D is better than 2D, right?

It seems that every few years, 3D technology is in the zeitgeist (witness 3D movies).  User interfaces are not immune to the frenzy.  However, there is quite a bit of past research on 3D interfaces (I won’t even scratch the surface, but see this simple Google Scholar search to start). Much, but not all, of it relates to navigation in virtual environments, while other research relates to the use of depth and perspective cues.  There are still many outstanding issues in the use of 3D in user interfaces, among them interaction (input and output), effects on workload, and effects on learning.

In general, 3-dimensional displays (like a perspective view) are perceived to be more natural and possibly require less mental integration than 2-dimensional displays (see this very well-researched U.S. FAA report on multifunction displays; warning: PDF).  Some of the logic goes like this: when I view a 2D map, I usually turn it into a 3D representation in my head. Showing a 3D representation removes this step (in addition to showing more information).  Compare the two types of information displays:

2D and 3D representations of a hiking trail

These images come from a user study examining user preferences in map presentation (2D or 3D). The research showed that it depends.  The preference data was complex (see the paper): overall, preferences were evenly split, but those aged 26-40 preferred 3D maps, and males preferred 2D maps while females preferred 3D maps (which seems surprising).

Personally, I switch between the 2D and 3D views when I can because each offers information the other does not. I like to examine hikes after the fact (collecting and mapping GPS data). See below; each view gives you different information:

Left: 2D, top-down view. Right: 3D, perspective view

The 2D view gives a good general overview and shows the intricacies of the trail, but no elevation information, while the 3D view shows terrain but obscures the path (part of it is hidden behind the terrain).
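If you want to reproduce this comparison with your own tracks, it takes only a few lines: the same latitude/longitude/elevation points rendered top-down and in perspective. The sketch below uses made-up coordinates and matplotlib; a real track would first have to be parsed from a GPX file, which I leave out here.

```python
# Minimal sketch: plot the same GPS track as a 2D top-down view and a 3D perspective view.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (needed on older matplotlib versions)

# Made-up track: latitude, longitude, elevation (meters) along a short hike.
t = np.linspace(0, 4 * np.pi, 200)
lat = 35.70 + 0.002 * t
lon = -82.50 + 0.001 * np.sin(t)
elev = 650 + 40 * np.sin(t / 2) + 10 * t

fig = plt.figure(figsize=(10, 4))

# 2D top-down view: good overview of the trail's shape, no elevation information.
ax2d = fig.add_subplot(1, 2, 1)
ax2d.plot(lon, lat)
ax2d.set_xlabel("Longitude")
ax2d.set_ylabel("Latitude")
ax2d.set_title("2D top-down")

# 3D perspective view: shows elevation, but parts of the path can be visually occluded.
ax3d = fig.add_subplot(1, 2, 2, projection="3d")
ax3d.plot(lon, lat, elev)
ax3d.set_xlabel("Longitude")
ax3d.set_ylabel("Latitude")
ax3d.set_zlabel("Elevation (m)")
ax3d.set_title("3D perspective")

plt.tight_layout()
plt.show()
```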

A more subtle use of 3D on websites is the parallax effect, which creates an illusion of depth. This website showcases some creative uses of the effect. Most websites use it purely for aesthetics; however, I noticed that the new Google Nexus One phone uses it in a subtle but useful way to indicate that you are on a different screen (a type of low-level feedback). See the video below. When the user slides horizontally to another screen, the galaxy animation shifts perspective:

Notice the slight perspective shift of the galaxy background after the user swiped the screen

Embedded video (skip to 61 seconds in):
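The arithmetic behind this kind of parallax is tiny: each layer is shifted by some fraction of the foreground movement, so distant layers appear to lag behind. The layer names and depth factors below are invented for illustration and are not taken from the Nexus One implementation:

```python
# Toy sketch of parallax scrolling: each layer moves at a fraction of the scroll distance,
# determined by a depth factor between 0 (infinitely far, never moves) and 1 (foreground).

layers = {
    "galaxy background": 0.2,   # far away: shifts only slightly
    "icon grid":         1.0,   # foreground: follows the swipe exactly
}

def layer_offsets(scroll_px: float) -> dict:
    """Horizontal offset of each layer for a given foreground scroll distance."""
    return {name: scroll_px * depth for name, depth in layers.items()}

if __name__ == "__main__":
    for swipe in (0, 160, 320):  # pixels the user has swiped between home screens
        print(swipe, layer_offsets(swipe))
```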

In some cases, when used appropriately, learning can be enhanced by the use of 3D.  Researchers Avi Parush and Dafna Berman (in a 2004 paper in the International Journal of Human-Computer Studies) were interested in the use of 3D interfaces for navigation and orientation in a virtual environment.  The virtual environment contained the objects that one would normally have on their computer desktop (e.g., files, applications).  Will the use of a 3D environment enhance learning and performance?  They manipulated which of two kinds of aids subjects had to help them: landmarks or a route list.  They found that both types of aids helped in the learning process, but that landmark placement (in either 2D or 3D) was critical.

Example of the 3D condition (left, no landmarks; right, landmarks). From Parush & Berman (2004)

One commercially available tool that gives users this kind of view of their computer is the BumpTop desktop:

BumpTop Desktop

The BumpTop desktop introduces the further complication (on top of 3D) of the nature of interaction.  You are using a 2D surface (the touch pad or mouse) to navigate a 3D environment, in some cases with multi-touch gestures (using more than one finger).  Very cool…but useful?  See for yourself:

(post image: http://www.flickr.com/photos/minusbaby/4185007435/)

Data visualization tools

Foreshadowing Anne’s upcoming series of posts on large, public, and free data sets, here are two interesting tools to help you visualize massive quantities of data. First, my grad student Margaux informed me of Google Fusion Tables (shown above). The site lets you upload data and visualize it in different ways. The website has some samples.

From the website:

Look at public data.

Get started with an interesting data set from the Table Gallery.

Import your own.

Upload data tables from spreadsheets or csv files. During our labs release, we can support up to 100MB per table, and up to 250MB per user. You can export your data as csv too.

Visualize it instantly.

See the data on a map or as a chart immediately. Columns with locations are interpreted automatically, and you can adjust them directly on a map if necessary. Use filter and aggregate tools for more selective visualizations.
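For readers who want to try the same filter-and-aggregate workflow offline, here is a rough equivalent using pandas. The file name and column names are invented for illustration; this is not the Fusion Tables API, just the same style of import, filter, aggregate, and chart:

```python
# Sketch of a Fusion-Tables-style workflow done locally with pandas:
# load a table, filter rows, aggregate by a column, and chart the result.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical table of visits with a location column (the file and columns are made up).
df = pd.read_csv("visits.csv")              # e.g., columns: country, city, visits

recent = df[df["visits"] > 0]               # filter
by_country = recent.groupby("country")["visits"].sum().sort_values(ascending=False)

by_country.head(10).plot(kind="bar")        # visualize the aggregate
plt.ylabel("Visits")
plt.title("Visits by country (top 10)")
plt.tight_layout()
plt.show()
```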

Pivot

Microsoft Pivot

Next is Microsoft’s Pivot. Instead of being a web-based service, it is a program that runs on your computer (Windows only). It is currently available by invite only (an invite code is available from TechCrunch). I just installed it, but the installer encountered an error, so I have not yet been able to play with it. From what I’ve seen, though, it provides a type of faceted-browsing front end for disparate sources of data.

Pivot makes it easier to interact with massive amounts of data in ways that are powerful, informative, and fun. We tried to step back and design an interaction model that accommodates the complexity and scale of information rather than the traditional structure of the Web.

TechCrunch got a sneak preview and wrote up a more detailed description:

The best way to understand the importance of Pivot is through a real-world example of how this technology would work. So let’s say I wanted a visualization of all the Wikipedia links to TechCrunch, Pivot would essentially crawl all of Wikipedia and create a map of the Wikipedia pages that are connected to TechCrunch, such as Michael Arrington’s Wikipedia page.

Another real-world use of Pivot is extracting data from Facebook. For example, you can use Pivot to crawl Facebook and break down friends by various data points like relationship status or college. Microsoft has an interesting example of Pivot being used to sort through Sports Illustrated covers, where you can break down covers into verticals by type of sport, team, athlete and more.
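The faceted-browsing idea behind these examples is simple to mimic: every item carries a few categorical attributes (facets), and the interface intersects filters across them while showing counts for each remaining value. Here is a toy sketch with made-up cover records (not Pivot's actual data or code):

```python
# Toy sketch of faceted browsing: filter a collection on one facet and
# report the value counts of the remaining facets, as a Pivot-style UI would.
from collections import Counter

covers = [
    {"sport": "basketball", "team": "Bulls",   "decade": "1990s"},
    {"sport": "basketball", "team": "Lakers",  "decade": "2000s"},
    {"sport": "football",   "team": "Packers", "decade": "1990s"},
    {"sport": "baseball",   "team": "Yankees", "decade": "2000s"},
]

def facet_counts(items, **filters):
    """Apply facet filters, then count the values of every facet in what remains."""
    remaining = [it for it in items if all(it[k] == v for k, v in filters.items())]
    counts = {facet: Counter(it[facet] for it in remaining) for facet in items[0]}
    return remaining, counts

if __name__ == "__main__":
    hits, counts = facet_counts(covers, decade="1990s")
    print(f"{len(hits)} covers match")
    for facet, counter in counts.items():
        print(f"  {facet}: {dict(counter)}")
```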

Usability/Design/HF Potpourri



Graph extravaganza: Who are our “users”?

It’s Friday, so here are some interesting visitor statistics for the blog (based on the last 1,580 visitors).  I meant to do this on our two-year anniversary (two months ago), but better late than never.

First, where are our visitors coming from?  Primarily the U.S. and Europe, with some visits from China.

Visitors by Country (click to expand)

Zooming in on the U.S. and Europe:

United States (click to zoom)
Europe (click to zoom)

Next, what operating systems, browsers, and screen resolutions are our users using?  The most common combination appears to be Windows XP with IE.

Operating systems
Web browsers (the multiple versions of Safari could also indicate Webkit browsers like iPhone, Android and Palm Pre)
Screen resolutions (unknowns probably represent wide screen monitors or mobile phone sizes)

Finally, when people reach us using a search engine, which one do they use?

Search engines

Usability issues in navigating your life

Gordon Bell, a Microsoft researcher, is recording his life in excruciating detail in a project dubbed MyLifeBits:

Web sites he’s visited (221,173), photos taken (56,282), emails sent and received (156,041), docs written and read (18,883), phone conversations had (2,000), photos snapped by the SenseCam hanging around his neck (66,000), songs listened to (7,139), and videos taken by him (2,164).

Why is he doing this?  He sees some appeal in the ability to always remember:

By using e-memory as a surrogate for meat-based memory, he argues, we free our minds to engage in more creativity, learning, and innovation (sort of like Getting Things Done without all those darn Post-its).

In a work context, this is true.  A large part of my time is spent looking for files or trying to remember where things are.

A whole slew of interesting human factors and usability questions are elephants in the room:

  • Currently, a portion of the recording is done manually.  How and what should be automated?
  • How does one efficiently search/browse through potentially petabytes of lifedata?  I don’t think a search engine would suffice (not all material would be textual).
  • This seems to solve the “encoding” problem in memory.  But it wreaks havoc with the “retrieval” portion.  You still need a good retrieval cue.
  • What are the implications of off-loading so much memory?  How will it change the way we currently learn/work?
  • As a type of automation, what will happen when it fails or is unreliable?
  • What are the privacy implications of recording this much data (especially the sensecam)?

His book outlining this idea comes out September 17th (Amazon link).