Story in the Washington Post about the impending demise of the computer mouse in favor of touch screens:
“Most children here have never seen a computer mouse,” said Hannah Tenpas, 24, a kindergarten teacher in San Antonio.
“The popularity of iPads and other tablets is changing how society interacts with information,” said Aniket Kittur, an assistant professor at the Human-Computer Interaction Institute at Carnegie Mellon University. “. . . Direct manipulation with our fingers, rather than mediated through a keyboard/mouse, is intuitive and easy for children to grasp.”
I realize the media needs a strong narrative to make an interesting story, but the mouse is nowhere near dead. The reality is more complicated and depends entirely on the task. There are certain applications where the precise pointing afforded by a mouse is essential, and touch screens are just too cumbersome.
I recently published a study (conducted last year) on automation trust and dependence. In that study, we pseudo-wizard-of-oz’ed a smartphone app that would help diabetics manage their condition.
We had to fake it because no such app existed and it would have been too onerous to program one (and we weren’t necessarily interested in the app itself, just in a form of advanced, non-existent automation).
Now, that app is real. I had nothing to do with it but there are now apps that can help diabetics manage their condition. This NYT article discusses the complex area of healthcare apps:
Smartphone apps already fill the roles of television remotes, bike speedometers and flashlights. Soon they may also act as medical devices, helping patients monitor their heart rate or manage their diabetes, and be paid for by insurance.
The idea of medically prescribed apps excites some people in the health care industry, who see them as a starting point for even more sophisticated applications that might otherwise never be built. But first, a range of issues — around vetting, paying for and monitoring the proper use of such apps — needs to be worked out.
The focus of the article is on regulatory hurdles while our focus (in the paper) was how potential patients might accept and react to advice given by a smartphone app.
This humorous NYT article discusses the foibles of auto-correct on computers and phones. Auto-correct, a more advanced version of the old spell checker, is a type of automation. We’ve discussed automation many times on this blog.
But auto-correct is unique in that it’s probably one of the most frequent touchpoints between humans and automation.
The article nicely covers, in lay language, many of the concepts of automation:
Out of the loop syndrome:
Who’s the boss of our fingers? Cyberspace is awash with outrage. Even if hardly anyone knows exactly how it works or where it is, Autocorrect is felt to be haunting our cellphones or watching from the cloud.
We are collectively peeved. People blast Autocorrect for mangling their intentions. And they blast Autocorrect for failing to un-mangle them.
I try to type “geocentric” and discover that I have typed “egocentric”; is Autocorrect making a sort of cosmic joke? I want to address my tweeps (a made-up word, admittedly, but that’s what people do). No: I get “twerps.” Some pairings seem far apart in the lexicographical space. “Cuticles” becomes “citified.” “Catalogues” turns to “fatalities” and “Iditarod” to “radiator.” What is the logic?
One more thing to worry about: the better Autocorrect gets, the more we will come to rely on it. It’s happening already. People who yesterday unlearned arithmetic will soon forget how to spell. One by one we are outsourcing our mental functions to the global prosthetic brain.
Humorously, the article even touches on anthropomorphism of automation (attributing human-like characteristics to it, even unintentionally), which is my research area:
Peter Sagal, the host of NPR’s “Wait Wait … Don’t Tell Me!” complains via Twitter: “Autocorrect changed ‘Fritos’ to ‘frites.’ Autocorrect is effete. Pass it on.”
Interesting research (PDF link) on how people behave when robots are wrong. In a recent paper, researchers created a situation where a robot misdirected a human in a game. In the follow-up interviews, one finding in particular caught my eye:
When asked whether Robovie was a living being, a technology, or something in-between, participants were about evenly split between “in-between” (52.5%) and “technological” (47.5%). In contrast, when asked the same question about a vending machine and a human, 100% responded that the vending machine was “technological,” 90% said that a human was a “living being,” and 10% viewed a human as “in-between.”
The bottom line was that a large portion of the subjects attributed some moral/social responsibility to this machine.
Taken broadly, the results from this study – based on both behavioral and reasoning data – support the proposition that in the years to come many people will develop substantial and meaningful social relationships with humanoid robots.
Here is a short video clip of how one participant reacted upon discovering Robovie’s error.
I wonder if similar results would be found when people interact with (and make attributions to) less overtly humanoid systems (disembodied automated systems like a smartphone app).
The University of Ottawa is considering a proposal which would give its professors the power to ban laptops and other electronic devices in the classroom.
Professors say everything from texting to time on Facebook is allowing their students to do everything but learn.
“They are distracted and we are competing with that for their attention,” says University of Ottawa professor Marcel Turcotte who voted in favour of the policy.
“You see one student who is really not listening, would be watching the video and then it’s kind of contagious,” says Turcotte.
As a professor, I see my share of this as well. Every classroom has wireless and it’s just too tempting to browse Facebook and other non-relevant sites during class. A student once told me that they are distracted by OTHER students’ laptops when those students are watching YouTube or browsing Facebook: secondhand distraction.
I happen to see more phone texting in my classes. <begin RANT>My opinion is that there is nothing special about a laptop that merits special treatment over any other technology (it’s not a magical note-taking tool). If we take a more critical look at what the students and administrators say in the article:
But many students say they learn better with a laptop and the vice president of the university’s student federation says it’s an important tool.
What does that mean? “Learn better”? How do they know? And what does “important tool” mean? Again, it’s just a word processor, not a magical note-taking tool. It’s attitudes and implicit assumptions like these (more specifically, a blind, unquestioning trust that the simple PRESENCE of a high-technology tool will inevitably lead to better outcomes; it HAS to, it’s HIGH TECH!) that are a major problem. It’s marketing speak by companies who want to sell and integrate very expensive technology into our cars, classrooms, phones, and offices, and administrators just eat it up. What problem is being solved? <end RANT>
With the introduction of “the new iPad” (i.e., iPad 3) I thought it would be a good time to update one of the most popular posts on this blog. That post was about incorporating an iPad into my daily work and play routine. It was written when the iPad was first introduced in 2010 and was mostly an exploration of some initial impressions and app suggestions from the perspective of an academic (non-student, higher education).
Based on the incredible popularity of that post and its update, it’s clear that many academics would like to incorporate the iPad into their workflow. My work is probably very similar to a generic office worker’s: lots of reading (mostly scanned journal article PDFs), writing, light note-taking, presentations, and data analysis.
In the years since I first got the iPad, I’ve slowly learned which tasks are best accomplished on the iPad and which should be left to the computer. I’ve also downloaded and deleted a large variety of apps, whittling them down until I found the one (or three) that works best.
I’ve also since moved on to the iPad 2. It was a nice upgrade because it was dramatically thinner and lighter than the original iPad, which made holding it more comfortable. The increased speed also made reading scanned PDFs more pleasant. This is why I can’t wait for the iPad 3: more speed and a higher-resolution screen will significantly affect my most frequent tasks (see below).
This post is organized around my common work tasks and the apps I use most frequently. I don’t discuss the built-in mail program, calendar, or web browser (though all are heavily used).
Most of my library of thousands of PDFs consists of scanned journal articles. A smaller but growing portion, the newer articles, are native PDFs created by the publisher. The difference is that the scanned PDFs are usually bigger and slightly fuzzier.
My original suggested app was iAnnotate, mainly because of its ability to directly annotate PDFs with notes and scribbles. But I kept Goodreader for just plain reading because it seemed faster and more intuitive. Fortunately, Goodreader has kept improving and it’s now my most-used PDF application. The best feature is its Dropbox integration: I only have to point it to a folder to download a semester’s worth of PDFs.
As good as Goodreader is, there are times when I need to move between PDF pages quickly and would like an alternative to page flipping. In those cases I use PDF Expert, since it has a nice bird’s-eye view of nine pages at once, though its page rendering seems slower.
I still use the iPad for light note-taking in meetings or by myself. I find it sufficient for most of my needs, especially if you add a few accessories. In my previous post, I mentioned Evernote. I don’t really use Evernote much anymore. I can’t quite put my finger on why, but it’s just not the right app/service for me. I notice that I tend to just dump things into it that I think I’ll need later but end up not needing.
Instead, I use a few note-taking tools, none of which is a clear favorite yet. The software keyboard is still sufficient for 80% of my needs; I’m able to type relatively fast and error-free. For typewritten notes, I use the built-in Notes application (which syncs to cloud services).
When I’m traveling light (and I always am) but I know I’ll need to type out some e-mails or do some other writing, a great hardware accessory is the low-cost Amazon Bluetooth keyboard. It’s only about $35 (half the price of the metal Apple-branded accessory keyboard) and has a relatively nice feel for such a small keyboard. The great thing is that I only take it when I REALLY want a hardware keyboard, which is not all the time.
On the rare occasion that I need to capture handwriting, I don’t have a favorite app; instead there are two or three that each have something the others lack. As an aside, some people think they want handwriting, but I’m not one of them. My handwriting is horribly mangled and unreadable unless I concentrate. Plus, handwritten notes are not usually text-searchable.
First, my usual app is called Notes Plus. It recently underwent a major upgrade with some pretty amazing features like split-screen viewing of a web page while you take notes and audio recording:
But I really hate the silver/metal look. I sometimes alternate and use Ghostwriter for handwritten notes or if I need to make a drawing:
Both of these applications export their notes into Evernote, Dropbox, or plain PDFs. When I am handwriting (again, which is probably less than 5% of the time) I use a cheap stylus from Amazon.
Finally, I’ve been editing presentations more on the iPad since switching to the Keynote presentation app on my desktop. When I need to organize my lectures or work on a presentation, the Keynote iPad app is surprisingly powerful but easy to use. I’m amazed that so much functionality could be built into a touch-only app:
I still use my laptop to actually give the presentation because I like to view the upcoming slide and the iPad currently just mirrors the current slide. I also use in-class clickers which require a laptop.
Other Useful Utilities
Finally, there are a few add-ons or apps that I find useful. The first is Wikipanion (yes, it’s OK to use Wikipedia). Wikipanion is a nice app front end to Wikipedia:
The second, Offline Pages, is an app that allows you to download full web pages or websites for off-line viewing (e.g., on a plane).
Finally, there are times when you want to send a link or snippet of text from your desktop computer to your iPad. A useful app/service is Prowl. When you sign up for and then install the Prowl app and browser extension, you can send links directly from your browser to your iPad.
Another bonus is that once you sign up for the Prowl service and install an app on your desktop computer, you can also send text snippets from anywhere on your computer (e.g., a telephone number, address, paragraph of text) to your iPad.
What I Don’t/Can’t Do
Based on the number of hits the iPad posts have received from the search term “SPSS and iPad,” there seems to be a bit of demand… are you listening, IBM?
To be honest, I don’t know if I want to be analyzing data on the iPad anyway. However, most data analysis is pointing and clicking, so who knows; maybe some creative developer will create a data analysis application perfectly suited to a touch-only interface.
I do a fair amount of programming and it would just be unbearable to do that on an iPad.
It’s election season which means more opportunities to point, laugh, and cry at the state of voting usability. The first is sent in by Kim W. As part of an NPR story, the reporter dug up a sample ballot. Pretty overwhelming and confusing (“vote for not more than one”??); makes me long for electronic voting.
Next, Ford is sending out a software update for its popular MyFord Touch car telematics system. The following NYT article does an excellent job of highlighting that not only basic usability but the broader “user experience” is just as important as technical capability and specs. The article lists a variety of usability quirks that should have been caught in user testing (e.g., “a touch-sensitive area under the touch screen that activates the hazard lights has been replaced with a mechanical button, because Ford learned that drivers were inadvertently turning on the hazard lights as they rested their hand while waiting for the system to respond.”).
I am being facetious when I point and laugh, but seriously, many of these issues could have been caught early with basic, relatively cheap, simple user testing.
“I think they were too willing to rush something out because of the flashiness of it rather than the functionality,” said Michael Hiner, a former stock-car racing crew chief in Akron, Ohio, who bought a Ford Edge Limited last year largely because he and his wife were intrigued by MyFord Touch.
Now Ford has issued a major upgrade that redesigns much of what customers see on the screen and tries to resolve complaints about the system crashing or rebooting while the vehicle is being driven. Ford said on Monday that the upgrade made the touch screens respond to commands more quickly, improved voice recognition capabilities and simplified a design that some say had the potential to create more distractions for drivers who tried to use it on the road. Fonts and buttons on the screen have been enlarged, and the layouts of more than 1,000 screens have been revamped.
One thing that annoys me is the silly argument that paper is bad or that paper kills. Such hollow arguments are used to encourage technology adoption in airplane cockpits, the classroom, and hospitals. Usually they are accompanied by silly statistics about how much paper is saved, how much less weight is carried, or how much easier it will be to look through documents (I use an iPad to hold hundreds of articles, and while I can *hold* more articles, it has not translated to more reading and does not improve my reading comprehension at all).
Hospitals and doctors’ offices, hoping to curb medical error, have invested heavily to put computers, smartphones and other devices into the hands of medical staff for instant access to patient data, drug information and case studies.
But like many cures, this solution has come with an unintended side effect: doctors and nurses can be focused on the screen and not the patient, even during moments of critical care. And they are not always doing work; examples include a neurosurgeon making personal calls during an operation, a nurse checking airfares during surgery and a poll showing that half of technicians running bypass machines had admitted texting during a procedure.
In the “why didn’t I think of this!” department, we have the Little Printer Concept by Berg. It basically seems like a cash register thermal printer (in much nicer packaging) that sits in your home and prints messages, puzzles, etc.
I could see this being very useful for older consumers who are resistant to technology. Imagine printing medication instructions or doctor appointment reminders or any reminder. Another use might be adult children using it to send their parents messages that they can rip and read anywhere.
I love the simplicity of the design and the fact that you can take the output anywhere you want (unlike a Wi-Fi digital picture frame or other “high tech” solution). I really hope this product comes to market. The video is definitely worth a look.