Here is a link to some neat new research being done by my colleagues at NCSU. It’s about the development of a tool that instantly changes the look of software code as it’s being developed, allowing for different ways to investigate bugs and features without changing the code in any way that might introduce errors. Dr. Emerson Murphy-Hill developed the interface for this “refactoring” of code and published a paper on it this past semester.
“The researchers designed the marking menus so that the refactoring tools are laid out in a way that makes sense to programmers. For example, tools that have opposite functions appear opposite each other in the marking menu. And tools that have similar functions in different contexts will appear in the same place on their respective marking menus.
Early testing shows that programmers were able to grasp the marking menu process quickly, and the layout of the tools within the menus was intuitive.”
Miller Columns are the browsing/visualization technique used in the Mac OS X Finder. The technique was inherited from the NeXT operating system (one of my favorites). I personally prefer it to the tree view that’s common in Windows Explorer.
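The core of the technique is simple: each level of the hierarchy gets its own column, and selecting an item opens its children in the next column to the right. A minimal sketch, using a made-up folder tree (not any real filesystem API):

```python
# Hypothetical folder tree; leaves are files (None)
tree = {
    "Documents": {"Papers": {"draft.pdf": None}, "notes.txt": None},
    "Music": {"jazz": {}},
}

def miller_columns(tree, path):
    """One column per level: the siblings visible at each step of the path."""
    columns, node = [], tree
    for name in path:
        columns.append(sorted(node))
        node = node[name]
    columns.append(sorted(node))  # children of the final selection
    return columns

cols = miller_columns(tree, ["Documents", "Papers"])
# Each inner list is one column, left to right
```

Unlike a tree view, the columns show the full selection path and the siblings at every level at once, which is why drilling down and backing up feels so fast.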
The embedded video below summarizes the essential action of the movie Inception (spoiler alert!):
We’ve discussed Mark Coleran before for his fantastical work on the fake user interfaces you see in movies (see video below). According to this Fast Company blog post, he will now have a hand in designing real interfaces.
But Coleran doesn’t just throw out the rule books on user experience and “human interface guidelines.” In fact, because many of his clients know his movie work, he spends a lot of time talking them out of doing something like Children of Men or The Bourne Ultimatum. “One of my biggest frustrations is when people will say, ‘We have these specifications and requirements, now execute it just like we saw in the movie,’” he says. “What they don’t realize is that the requirements for those movie FUIs were completely unlike the ones that they’re dealing with. In a movie, you see an interface for at most a couple of seconds. In real life, every design decision has a consequence, and it doesn’t go away. It’s there day in and day out. Those human interface guidelines are there for very good reasons.”
It seems that every few years, 3D technology enters the zeitgeist (currently via 3D movies). User interfaces are not immune to the 3D frenzy. However, there is quite a bit of past research on 3D interfaces (I won’t even scratch the surface, but see this simple Google Scholar search to start). Much, but not all, of it relates to navigation in virtual environments, while other research relates to the use of depth/perspective. There are still many outstanding issues in the use of 3D in user interfaces, some of which are: use/interaction (input, output), effects on workload, and effects on learning.
In general, 3-dimensional displays (like a perspective view) are perceived to be more natural and possibly require less mental integration than 2-dimensional displays (see this very well-researched U.S. FAA report on multifunction displays; warning: PDF). Some of the logic goes like this: when I view a 2D map, I usually turn it into a 3D representation in my head. Showing a 3D representation removes this step (in addition to showing more information). Compare the two types of information displays:
These images come from a user study examining user preferences in map presentation (2D or 3D). The research showed that it depends. The preference data was complex (see paper): overall, preferences were evenly split, but those aged 26-40 preferred 3D maps, and males preferred 2D maps while females preferred 3D maps (which seems surprising).
Personally, I switch between the 2D and 3D views when I can because each offers information the other does not. I like to examine hikes after the fact (collecting and mapping GPS data). See below; each view gives you different information:
The 2D view gives a good general overview and the intricacies of the trail but shows no elevation information, while the 3D view shows terrain but obscures the path (part of it is hidden behind the terrain).
A more subtle use of 3D on websites is the parallax effect, which creates an illusion of depth. This website showcases some creative uses of the effect. Most websites use it purely for aesthetics; however, I noticed the new Google Nexus One phone uses it in a subtle but useful way to indicate that you are on a different screen (a type of low-level feedback). See the video below. When the user slides to another screen horizontally, the animation of the galaxy changes perspective:
Embedded video (skip to 61 seconds in):
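The mechanics behind the effect are simple: layers “farther away” shift less than layers up close as you scroll or swipe, and the differential motion reads as depth. A minimal sketch with hypothetical layer depths and scroll values (not taken from any of the sites or devices above):

```python
def parallax_offset(scroll, depth):
    """Layers farther away (larger depth) shift less than the foreground."""
    return scroll / depth

# Three hypothetical layers: a star field (far), planets (mid), app icons (near)
layers = {"stars": 4.0, "planets": 2.0, "icons": 1.0}

scroll = 120  # pixels the user has swiped horizontally
offsets = {name: parallax_offset(scroll, depth) for name, depth in layers.items()}
# The icon layer moves the full 120 px while the star field moves only 30 px;
# the eye interprets the differential motion as depth.
```

In the Nexus One case, the same differential motion doubles as feedback: the background perspective shift confirms that the swipe actually changed screens.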
In some cases, when used appropriately, learning can be enhanced by the use of 3D. Researchers Avi Parush and Dafna Berman (in a 2004 paper in the International Journal of Human-Computer Studies) were interested in the use of 3D interfaces for navigation and orientation in a virtual environment. The virtual environment contained the objects one would normally have on a computer desktop (e.g., files, applications). Does a 3D environment enhance learning and performance? They manipulated whether subjects had one of two kinds of aids to help: landmarks or a route list. They found that both types of aids help the learning process, but a crucial finding was that landmark placement (in either 2D or 3D) mattered most.
One commercially available tool that gives users this kind of view of their computer is the BumpTop desktop:
The BumpTop desktop introduces a further complication (on top of 3D): the nature of the interaction. You are using a 2D surface (the touchpad or mouse) to navigate a 3D environment, and in some cases using multi-touch gestures (more than one finger). Very cool…but useful? See for yourself:
Foreshadowing Anne’s upcoming series of posts on large, public, and free data sets, here are two interesting tools to help you visualize massive quantities of data. First, my grad student Margaux informed me of Google Fusion Tables (shown above). The site lets you upload data and visualize it in different ways. The website has some samples.
From the website:
Look at public data.
Get started with an interesting data set from the Table Gallery.
Import your own.
Upload data tables from spreadsheets or csv files. During our labs release, we can support up to 100MB per table, and up to 250MB per user. You can export your data as csv too.
Visualize it instantly.
See the data on a map or as a chart immediately. Columns with locations are interpreted automatically, and you can adjust them directly on a map if necessary. Use filter and aggregate tools for more selective visualizations.
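The filter-and-aggregate workflow described above is easy to picture with a small sketch. This uses made-up data and the plain Python csv module, just to illustrate the kind of operation Fusion Tables performs behind the scenes:

```python
import csv
import io
from collections import defaultdict

# Hypothetical table of the kind you might upload as a csv file
data = """city,state,population
Raleigh,NC,403892
Durham,NC,229174
Austin,TX,790390
"""

rows = list(csv.DictReader(io.StringIO(data)))

# Filter: keep only the North Carolina cities
nc = [r for r in rows if r["state"] == "NC"]

# Aggregate: total population per state
totals = defaultdict(int)
for r in rows:
    totals[r["state"]] += int(r["population"])
```

The web service adds the interesting part on top of this: the location columns are detected automatically and the filtered/aggregated result lands on a map or chart without any code.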
Next is Microsoft’s Pivot. Instead of being a web-based service, it is a program that runs on your computer (Windows only). It is currently available by invitation only (an invite code is available from TechCrunch). I just installed it, but the installer encountered an error, so I have not yet been able to play with it. From what I’ve seen, though, it provides a type of faceted-browsing front-end for disparate sources of data.
Pivot makes it easier to interact with massive amounts of data in ways that are powerful, informative, and fun. We tried to step back and design an interaction model that accommodates the complexity and scale of information rather than the traditional structure of the Web.
TechCrunch got a sneak preview and wrote up a more detailed description:
The best way to understand the importance of Pivot is through a real-world example of how this technology would work. So let’s say I wanted a visualization of all the Wikipedia links to TechCrunch: Pivot would essentially crawl all of Wikipedia and create a map of the Wikipedia pages that are connected to TechCrunch, such as Michael Arrington’s Wikipedia page.
Another real-world use of Pivot is extracting data from Facebook. For example, you can use Pivot to crawl Facebook and break down friends by various data points like relationship status or college. Microsoft has an interesting example of Pivot being used to sort through Sports Illustrated covers, where you can break down covers into verticals by type of sport, team, athlete and more.
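The faceted browsing underneath examples like the Sports Illustrated one is straightforward to sketch: each item carries a few facet values, and the current view is simply the intersection of the selected ones. A hypothetical sketch (invented cover data, nothing from Pivot itself):

```python
# Hypothetical magazine covers, each tagged with facet values
covers = [
    {"year": 1998, "sport": "basketball", "athlete": "Michael Jordan"},
    {"year": 2004, "sport": "baseball",   "athlete": "Curt Schilling"},
    {"year": 2009, "sport": "basketball", "athlete": "LeBron James"},
]

def facet_filter(items, **selected):
    """Keep only the items matching every selected facet value."""
    return [item for item in items
            if all(item.get(k) == v for k, v in selected.items())]

hoops = facet_filter(covers, sport="basketball")
```

What a tool like Pivot adds is the interaction model: every facet click animates the collection into its new subset, so you never lose track of where the items went.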
Web sites he’s visited (221,173), photos taken (56,282), emails sent and received (156,041), docs written and read (18,883), phone conversations had (2,000), photos snapped by the SenseCam hanging around his neck (66,000), songs listened to (7,139), and videos taken by him (2,164).
Why is he doing this? He sees some appeal in the ability to always remember:
By using e-memory as a surrogate for meat-based memory, he argues, we free our minds to engage in more creativity, learning, and innovation (sort of like Getting Things Done without all those darn Post-its).
In a work context, this is true. A large part of my time is spent looking for files or trying to remember.
A whole slew of interesting human factors and usability questions are elephants in the room:
Currently, a portion of the recording is done manually. How and what should be automated?
How does one efficiently search/browse through potentially petabytes of lifedata? I don’t think a search engine would suffice (not all material would be textual).
This seems to solve the “encoding” problem in memory. But it wreaks havoc with the “retrieval” portion. You still need a good retrieval cue.
What are the implications of off-loading so much memory? How will it change the way we currently learn/work?
As a type of automation, what will happen when it fails or is unreliable?
What are the privacy implications of recording this much data (especially the sensecam)?
His book outlining this idea comes out September 17th (Amazon link).
There have been many recent examples of consumer-friendly augmented reality applications for smartphone users. I remember reading about augmented reality research over a decade ago (in an HCI class) and remember how bulky, expensive, experimental, and out-of-reach it seemed back then. Those systems required head-mounted displays physically tethered to cameras and large computers. Now augmented reality is available to any iPhone or Android smartphone user.
The first example below overlays subway signage and directional arrows to help you find your way around the NY subway. This seems great for tourists who may not be regular users of the metro (I wish I had this when I was in the Netherlands last month).
Speaking of the Netherlands, the second example is for Android phones and overlays information about bars, restaurants, and houses for sale in Amsterdam:
These are certainly impressive examples of augmented reality. But another fun and simple recent example is the ball tracker that was used by ESPN:
It is implied, but one possible reason we (as in “users”) like these is that augmented reality applications pre-integrate information for us (as in the first two examples), reducing the need to do it ourselves (a working-memory- and time-intensive activity), or they keep information available longer than sensory memory usually allows (the ball path), letting us see patterns that would otherwise be invisible.
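The “pre-integration” in the first two examples boils down to projecting a known real-world location into the camera image so the label lands on top of the real thing. A bare-bones pinhole-projection sketch, with made-up coordinates and focal length (not how any of the apps above actually work internally):

```python
def project(point, focal=800, cx=320, cy=240):
    """Pinhole projection of a camera-space point (x, y, z) to pixel coords.

    x is meters right of the camera, y is meters down, z is meters ahead;
    focal is in pixels, (cx, cy) is the image center.
    """
    x, y, z = point
    return (cx + focal * x / z, cy + focal * y / z)

# Hypothetical subway entrance 20 m ahead, 2 m right, 1 m below eye level
u, v = project((2.0, 1.0, 20.0))
# The app would draw its signage overlay at pixel (u, v) on the live frame
```

The integration work the user would otherwise do in their head (relating a map position to the scene in front of them) is done once, in the projection, and the answer is painted directly onto the view.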