Square Cash is a great service – it allows you to send money via an email with no service charge if you’re using your debit card. You can receive money without entering a PIN. I use it all the time to divide up restaurant bills among my friends. That said, I found a usability issue yesterday that I wanted to share.
I needed to link my debit card to the app, so I followed their very simple instructions for entry.
The first screen asks for the card number. The number pad is in telephone order (1-2-3 on the top row) rather than keyboard number-pad order (7-8-9 on top). This is on a phone, so that makes sense, even if I’m much more used to entering these numbers on a keyboard.
Next, the expiration date. On my card, the expiration date is 09/16/2016*, so I start to enter it.
Here is the screen as you start to enter the date:
I then proceeded to enter 09/16 as I looked at my card, then the CVV, and got an error message about an incorrect card number. Tried again. Same. I did this four times before I realized that the expiration date was month/year. It isn’t as though I’d never seen this, or never been asked to enter just the month and year from a card, so I thought hard about what tricked me.
I concluded it was the difference between the second and third screens – the guidance is there before you start typing, but as soon as you put in any number for the date, the guidance disappears. Since I was looking down at my card, I just entered what I saw and didn’t think enough to check – especially since it called for ##/##, which matched the month and day on my card, not ##/####, which could only be a month and year.
You are welcome to blame the user for this one, but it would be a small fix to keep the background guide visible during entry.
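The fix is essentially an input mask that never hides its guide. As a rough sketch of the idea (hypothetical Python, purely illustrative – the app’s actual code is unknown), the field could merge the digits typed so far with the static MM/YY guide so the untyped portion stays visible:

```python
def expiry_hint(typed: str, mask: str = "MM/YY") -> str:
    """Merge the digits typed so far with the static guide so the
    untyped portion of the mask stays visible, e.g. '09' -> '09/YY'."""
    digits = [c for c in typed if c.isdigit()]
    out = []
    for ch in mask:
        if ch.isalpha() and digits:
            out.append(digits.pop(0))  # replace a mask letter with a typed digit
        else:
            out.append(ch)             # keep separators and untyped guide letters
    return "".join(out)
```

With this behavior, a user who glances down at their card and types "0916" sees "09/16" fill in, while a user who has typed only "09" still sees "09/YY" – a persistent reminder that the app wants a year, not a day.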
*No, I’m not dumb enough to put my real card number or expiration date in the pictures for this post. 🙂
The big news in tech last week was the unveiling of the Apple Watch. I think it is a nice moment to discuss a range of human factors topics. (This topic may elicit strong feelings for or against Apple or the idea of a smartwatch but let’s keep it about the science.)
The first is technology adoption/acceptance. Lots of people were probably scratching their heads asking, “who wears a watch, nowadays?” But you do see lots of people wearing fitness bands. Superficially, that contrast seems to demonstrate the Technology Acceptance Model (TAM) in action. TAM is a way to try to understand when people will adopt new technology. It boils down the essential factors to usability (does it seem easy to use?) and usefulness (does it seem like it will help my work or life?).
Fitness bands check both of the above boxes: since they are essentially single-function devices they are relatively easy to use and tracking fitness is perceived as useful for many people.
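At its simplest, TAM’s claim is just that intention to adopt rises with both perceived usefulness and perceived ease of use. A toy sketch of that idea (the weights here are invented for illustration; real TAM studies estimate them from survey data):

```python
def adoption_intention(usefulness: float, ease_of_use: float,
                       w_useful: float = 0.6, w_ease: float = 0.4) -> float:
    """Toy illustration of TAM's core idea: adoption intention as a
    weighted sum of perceived usefulness and perceived ease of use
    (both rated 1-5). The weights are made up for this example."""
    return w_useful * usefulness + w_ease * ease_of_use
```

Under this sketch, a single-function fitness band that scores high on both dimensions predicts strong adoption, which matches what we see in the market.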
Back to the Watch: it may also check both of the above boxes. It certainly appears easy to use (though we do not know yet), and because it has fitness-tracking functions plus many others via apps, it may well be perceived as useful by the same crowd that buys fitness bands.
The next topic that got me excited was the discussion of the so-called digital crown (shown below). Anne and I have previously studied the contrasts between touch screens and rotary knobs for a variety of computing tasks. Having both choices allows the user to select the best input device for the task: touch for pushing big on-screen buttons and large-scale movement, and the knob for precise, linear movement without obscuring the screen. Using a knob is certainly easier than a touch screen if you have shaky hands or are riding in a bumpy cab.
One small item of note included in the Watch was the use of the two-finger gesture on the watch face to send a heartbeat to another user – the same gesture many people intuitively think of when they want to feel their own heartbeat.
Finally, the Watch has the ability to send animated emoji to other users. What was noteworthy is the ability to manipulate both the eyes and the mouth of the emoji characters. I couldn’t find the literature, but I recall that there are some cross-cultural differences in how people use and interpret emoji: Western users tend to focus on the mouth while Eastern users tend to focus on the eyes (if you know what reference I’m talking about, or if I’m mis-remembering, feel free to comment).
There’s so much I haven’t brought up (haptic and multi-modal feedback, user interface design, automation, voice input and of course privacy)!
This guest post is from graduate students Haley Vaigneur and Bliss Altenhoff. Haley and Bliss compared the usability of two fitness trackers as part of a graduate course in health informatics taught by Kelly Caine.
Wearable fitness trackers allow users to track and monitor their health. While these devices originated as a way for doctors to monitor chronically ill patients’ vitals, they have recently been developed and marketed to a more general, health-conscious market. Equipped with advanced sensors such as accelerometers, these devices automatically track users’ activity and sleep, which can then be compared with logged fitness goals and daily diet. Users can then use their statistics to help create or maintain a healthier lifestyle. Two examples of such devices are the Jawbone Up and Fitbit Flex, shown above.
Wearable technology is popular and has the potential to dramatically impact health (e.g., long-term health and activity data tracking, immediate syncing with Electronic Health Records (EHRs)). But these benefits can only be realized if the user is able to effectively use and understand these devices. This was the motivation for focusing on two of the most popular fitness trackers, the Jawbone Up and the Fitbit Flex, and their accompanying smartphone apps.
This study examined the usability of these two devices and their accompanying smartphone apps by having 14 participants (7 for the Jawbone Up, 7 for the Fitbit Flex) perform a think-aloud test on five key features: Setup, Setting Goals, Tracking Diet, Tracking Activity, and Setting an Alarm. Participants then kept the wearable for three days and were encouraged to incorporate it into their normal routine. On the third day, participants completed the System Usability Scale survey and an informal interview regarding their experiences using the wearable.
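For readers unfamiliar with it, the System Usability Scale produces a 0–100 score from ten 1–5 Likert items via a fixed formula: odd-numbered items are positively worded and contribute (response − 1), even-numbered items are negatively worded and contribute (5 − response), and the sum is multiplied by 2.5. A minimal sketch:

```python
def sus_score(responses: list[int]) -> float:
    """Compute the System Usability Scale score (0-100) from ten
    1-5 Likert responses. Odd-numbered items (1st, 3rd, ...) are
    positively worded; even-numbered items are negatively worded."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # i is 0-based, so even i = odd item
    return total * 2.5
```

A respondent who answers neutrally (all 3s) lands at exactly 50, which is one reason SUS scores are easy to compare across products.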
Some of the key Jawbone Up findings were:
Adding food or drink items was somewhat difficult due to unintuitive organization and unpredictable bugs. For example, one participant attempted to add a food item by scanning the bar code of a Lunchable, but the app added a Dr. Pepper to the log.
Participants struggled to find the alarm settings, with one conducting a general web search for help to understand the Smart Sleep Window settings and how to save alarm settings.
None of the participants were able to figure out how to communicate to the band or app that they would like to begin a workout. They didn’t realize that the Stopwatch menu option was intended to time the workout.
Some of the key findings for the Fitbit Flex were:
Participants felt that the wristband (when using the appropriately sized band) was neither uncomfortable nor revealing, and they were proud to wear it because it made them feel healthy.
Users had a difficult time figuring out where to go on the app to set their health goals at first. Their instinct was to find it on the app homepage, or Dashboard, but it was under the Account tab.
Some users had difficulty putting on the wristband, and several noted that it fell off unexpectedly. Users were also confused about where to “tap” the wristband to activate it, based on the instructions given in the app. The picture can appear to instruct the user to tap below the black screen, when the user actually needs to tap the screen directly, and firmly.
Users did not realize that after turning on Bluetooth on their phones, they needed to return to the app to tell the phone and wristband to begin syncing. They also noted that leaving Bluetooth on all day drained their phone batteries.
Based on time per task and number of errors, the Fitbit Flex performed better than the Jawbone Up on the five tasks. Users’ ultimate trust in the data, willingness to continue using the wearable, and general satisfaction with each wearable were heavily influenced by their initial (first-day) experiences. The positive initial think-aloud results for the Fitbit Flex were also consistent with a more positive later experience and stronger acceptance of the wearable.
This study found that there is still much room for improvement in the usability of the accompanying smartphone apps. A major concern for these kinds of devices is keeping user interest and motivation, which can easily be lost through confusing or cumbersome designs. By improving the human factors of the apps alongside the capabilities of the wearables themselves, there is great potential for greater user satisfaction, and thus more long-term use.
While activity-tracking wearables are currently most popular with more tech-savvy, active people, these devices should be designed to be usable by people of all ages and levels of experience. These devices could change health monitoring drastically and give people the power and ability to make better choices and live healthier lifestyles.
Haley Vaigneur is a graduate student in Industrial Engineering at Clemson University. Her concentration is Human Factors and Ergonomics, with an emphasis on research in the healthcare field.
Bliss Altenhoff is a Doctoral Candidate studying Human Factors Psychology at Clemson University, where she received her M.S. in Applied Psychology in 2012. She is a member of the Perception and Action (PAC) lab, where her research is concentrated on enhancing human perception and performance by enriching perceptual display technologies for laparoscopic surgeons.
ACKNOWLEDGMENTS This material is based upon work supported by the National Science Foundation under Grant No. 1314342. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
I was reading articles the other day and came across a site that, as many do, reformatted for my phone. Almost all reformatted-for-mobile sites are terrible, but this one is my favorite.
You cannot scroll through the 21 page article by moving your finger up and down, as would happen on a website. The only way to change pages is via the horizontal slider at the bottom. Good luck trying to move it so slightly it only goes forward one page! And yes, moving the slider left and right does move the page up and down.
Earlier this week the United States Department of Transportation released guidelines for automakers designed to reduce the distraction caused by in-vehicle technologies (e.g., navigation systems):
The guidelines include recommendations to limit the time a driver must take their eyes off the road to perform any task to two seconds at a time and twelve seconds total.
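Stated as a rule, the guideline is easy to operationalize: no single off-road glance longer than two seconds, and no more than twelve seconds cumulative per task. A hypothetical checker (Python, for illustration only; this is my paraphrase of the guideline, not NHTSA’s test procedure):

```python
def meets_glance_guideline(glances: list[float],
                           per_glance: float = 2.0,
                           total: float = 12.0) -> bool:
    """Check a sequence of off-road glance durations (in seconds) for one
    task against the guideline: no single glance over 2 s, and no more
    than 12 s of cumulative eyes-off-road time."""
    return (all(g <= per_glance for g in glances)
            and sum(glances) <= total)
```

So a navigation task requiring seven glances of 1.9 seconds each would fail on cumulative time (13.3 s) even though every individual glance passes.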
The recommendations outlined in the guidelines are consistent with the findings of a new NHTSA naturalistic driving study, The Impact of Hand-Held and Hands-Free Cell Phone Use on Driving Performance and Safety Critical Event Risk. The study showed that visual-manual tasks associated with hand-held phones and other portable devices increased the risk of getting into a crash by three times. [emphasis added]
But a new study (I have not read the paper yet) seems to show that even when you take away the “manual” aspect through voice input, the danger is not mitigated:
The study by the Texas Transportation Institute at Texas A&M University was the first to compare voice-to-text and traditional texting on a handheld device in an actual driving environment.
“In each case, drivers took about twice as long to react as they did when they weren’t texting,” Christine Yager, who headed the study, told Reuters. “Eye contact to the roadway also decreased, no matter which texting method was used.”
My previous posts on using the iPad have become some of the most popular posts on this blog. So I thought I would give you an update on my evolving use of the iPad.
My history with the iPad started with great skepticism, moved into curious and active experimentation, and has settled into routine usage. Now it’s an integrated part of my work and play. I’ve even done what was once unthinkable: nearly wrote an entire manuscript on the iPad without a hardware keyboard! (Read on.)
With great skepticism I got the original iPad a few months after it was released in 2010. While I could see the theoretical benefits of such a lightweight device, there was not yet much software that was specialized to do any work. In terms of usage, there were probably days that I did not use the iPad. It was primarily relegated to recreational web surfing or curious novelty.
After the release of the iPad 2, however, my usage increased dramatically. The reduction in weight and size, as well as the release of high-quality productivity software, meant that I not only carried it along with my then-laptop (a Fujitsu P1620 ultraportable tablet), I could start to envision how I might replace my laptop. Usage was probably split 20 (iPad)/80 (laptop) in terms of mobile computing. It also helped that it was at this time that I switched my desktop computer and laptop to Mac. This made it much more seamless to use Keynote and Pages as replacements for PowerPoint and Word. I’ve kicked PowerPoint, but I can’t yet kick Word to the curb.
The iPad 3 again increased my usage, mainly because the high-resolution display and dramatic speed increase made everything better, especially reading PDFs.
Now I have an iPad mini, and all the software I’ve mentioned in previous posts is still usable, but the form factor has truly made it my primary mobile device of choice over the laptop. An always-on, super-lightweight device seems to encourage frequent use in places where even a laptop is clunky (e.g., in bed, or as a passenger in a car). I’m currently working on a manuscript, and I would estimate that I’ve written more than 50% of it on the iPad mini (using the software keyboard and Pages), probably another 10% on the iPhone (reading what I wrote, light editing), and the rest on the desktop or laptop computer.
Keynote is an especially capable presentation app. I’ve worked on full presentations created on the iPad (but presented on a laptop). They are whisked silently through the cloud and are on my laptop/desktop waiting for me.
But there are other things that make the iPad work especially well for me. One feature that isn’t discussed a great deal in reviews is iCloud. iCloud, in contrast to Dropbox, invisibly keeps my Keynote (class lectures, professional presentations) and Pages (manuscripts) files in sync on all my devices (desktop, laptop, iPad mini, and iPhone). I still use Dropbox, but iCloud is a simpler model with less thinking about spatial file organization (the file is just in the app). I treat Dropbox as an archive, a folder with many levels of folders, while I treat iCloud as an active area for current work, a workspace. iCloud = short-term memory, Dropbox = long-term memory. This setup works quite well for me.
Uses will differ from person to person, but for me (someone who values portability above all else and is a tinkerer) the Mini is a winner (it replaced my iPad 3). I also did not set unrealistic expectations for the device, which may be why I’m so surprised how much of my daily computing can be handled by such a relatively low-powered device. The size and weight of the Mini simply overwhelm any other benefit of the larger iPads. When I travel, I am now more likely to be carrying just the iPad (with no laptop unless I know I’ll need to program or do statistical analysis). In the end, it allows me to do a smaller set of things in many more places than at my desk.
To conclude, my most frequently used apps lately are:
Keynote (lecture and presentation creation & editing)
Keynote and Papers are truly exceptional apps that have nearly the full functionality of their desktop counterparts without replicating the same interaction style (i.e., they are optimized for tablets). I actually prefer doing lit searches in the iOS version of Papers to using the desktop version!
This list is short because everything else is for fun!
In the newest post, Dr. Jeff Lawley discusses the usability of a DSM Reference app from Kitty CAT Psych. For those who didn’t take intro psych in college, the DSM is the Diagnostic and Statistical Manual, which classifies symptoms into disorders. It’s interesting to read an expert take on this app – he considers attributes I would not have thought of, such as whether the app retains information (privacy issues).
As Dr. Lawley notes on his “about” page, there are few apps designed for mental health professionals and even fewer evaluations of these apps. Hopefully his blog can fill that niche and inspire designers to create more mobile tools for these professionals.
I recently published a study (conducted last year) on automation trust and dependence. In that study, we pseudo-wizard-of-oz’ed a smartphone app that would help diabetics manage their condition.
We had to fake it because there was no such app, and it would have been too onerous to program one (and we weren’t necessarily interested in the app itself, just a form of advanced, non-existent automation).
Now, that app is real. I had nothing to do with it but there are now apps that can help diabetics manage their condition. This NYT article discusses the complex area of healthcare apps:
Smartphone apps already fill the roles of television remotes, bike speedometers and flashlights. Soon they may also act as medical devices, helping patients monitor their heart rate or manage their diabetes, and be paid for by insurance.
The idea of medically prescribed apps excites some people in the health care industry, who see them as a starting point for even more sophisticated applications that might otherwise never be built. But first, a range of issues — around vetting, paying for and monitoring the proper use of such apps — needs to be worked out.
The focus of the article is on regulatory hurdles while our focus (in the paper) was how potential patients might accept and react to advice given by a smartphone app.
The University of Ottawa is considering a proposal which would give its professors the power to ban laptops and other electronic devices in the classroom.
Professors say everything from texting to time on Facebook is allowing their students to do everything but learn.
“They are distracted and we are competing with that for their attention,” says University of Ottawa professor Marcel Turcotte who voted in favour of the policy.
“You see one student who is really not listening, would be watching the video and then it’s kind of contagious,” says Turcotte.
As a professor, I see my share of this as well. Every classroom has wireless and it’s just too tempting to browse Facebook and other non-relevant sites while in class. A student once told me that they are distracted by OTHER people’s laptops when that other student is watching Youtube or browsing Facebook: secondhand distraction.
I happen to see more phone texting in my classes. <begin RANT>My opinion is that there is nothing special about a laptop that makes it deserve special treatment over any other technology (it’s not a magical note-taking tool). Let’s take a more critical look at what the students and administration say in the article:
But many students say they learn better with a laptop and the vice president of the university’s student federation says it’s an important tool.
What does that mean? “Learn better”? How do they know? And what does “important tool” mean? Again, it’s just a word processor, not a magical note-taking tool. It’s attitudes and implicit assumptions like these (more specifically, a blind, unquestioning trust that the simple PRESENCE of a high-technology tool will inevitably lead to better outcomes; it HAS to, it’s HIGH TECH!) that are a major problem. It’s marketing speak by companies who want to sell and integrate very expensive technology into our cars, classrooms, phones, and offices, and administrators just eat it up. What problem is being solved? <end RANT>
With the introduction of “the new iPad” (i.e., iPad 3) I thought it would be a good time to update one of the most popular posts on this blog. That post was about incorporating an iPad into my daily work and play routine. It was written when the iPad was first introduced in 2010 and was mostly an exploration of some initial impressions and app suggestions from the perspective of an academic (non-student, higher education).
Based on the incredible popularity of that post and its update, it’s clear that many academics would like to incorporate the iPad into their workflow. My work is probably very similar to a generic office worker’s: lots of reading (mostly scanned journal-article PDFs), writing, light note-taking, presentations, and data analysis.
In the years since I first got the iPad, I’ve slowly learned which tasks can best be accomplished with the iPad and which should be left to the computer. I’ve also downloaded and deleted a large variety of apps, whittling them down until I found the one (or three) that works best.
I’ve also since moved on to the iPad 2. It was a nice upgrade because it was dramatically thinner and lighter than the original iPad which made holding it more comfortable. The increased speed also made reading the scanned PDFs more pleasant. This is why I can’t wait for the iPad 3: more speed and higher resolution screen will significantly affect my most frequent tasks (see below).
This post is organized around my common work tasks and the apps I use most frequently. I don’t discuss the built-in mail program, calendar, or web browser (though all are heavily used).
Most of my library of thousands of PDFs are scanned journal articles. A smaller but growing portion of the newer articles are non-scanned PDFs that were created by the publisher. The difference is that the scanned PDFs are usually bigger and slightly fuzzier.
My original suggested app was iAnnotate, mainly because of its ability to directly annotate PDFs with notes and scribbles. But I kept Goodreader for plain reading because it seemed faster and more intuitive. Fortunately, Goodreader has kept improving, and it’s now my most-used PDF application. The best feature is its Dropbox integration: I only have to point it at a folder to download a semester’s worth of PDFs.
As good as Goodreader is, there are times when I need to move between PDF pages quickly and would like an alternative to page flipping. In that case I use PDF Expert, since it has a nice bird’s-eye view of 9 pages, though it seems slower at page rendering.
I still use the iPad for light note-taking in meetings or by myself. I find it sufficient for most of my needs, especially with a few accessories. In my previous post, I mentioned Evernote. I don’t really use Evernote actively anymore. I can’t quite put my finger on why, but it’s just not the right app/service for me. I notice that I tend to dump things into it that I think I’ll need later but end up not needing.
Instead, I use a few note-taking tools, none of which is a clear favorite yet. The software keyboard is still sufficient for 80% of my needs; I’m able to type relatively fast and error-free. For typewritten notes, I’ll use the built-in Notes application (which syncs to cloud services).
When I’m traveling light (and I always am) but I know i’ll need to type out some e-mails or do some other writing, a great hardware accessory is the low-cost Amazon Bluetooth keyboard. It’s only about $35 (half the price of the metal Apple-branded accessory keyboard) and has a relatively nice feel for such a small keyboard. The great thing is that I only take it when I REALLY want a hardware keyboard which is not all the time.
On the rare occasion that I need to capture handwriting, I don’t have a favorite app; instead, there are two or three that each have something the others do not. As an aside, some people think they want handwriting, but I’m not one of them. My handwriting is horribly mangled and unreadable unless I concentrate. Plus, handwritten notes are not usually text-searchable.
First, my usual app is called Notes Plus. It recently underwent a major upgrade with some pretty amazing features, like audio recording and split-screen viewing of a web page while you take notes:
But I really hate the silver/metal look. I sometimes alternate and use Ghostwriter for handwritten notes or if I need to make a drawing:
Both of these applications export their notes into Evernote, Dropbox, or plain PDFs. When I am handwriting (again, which is probably less than 5% of the time) I use a cheap stylus from Amazon.
Finally, I’ve been editing presentations more on the iPad since switching to the Keynote presentation app on my desktop. When I need to organize my lectures or work on a presentation, the Keynote iPad app is surprisingly powerful but easy to use. I’m amazed that so much functionality could be built into a touch-only app:
I still use my laptop to actually give the presentation because I like to view the upcoming slide and the iPad currently just mirrors the current slide. I also use in-class clickers which require a laptop.
Other Useful Utilities
Finally, there are a few add-ons or apps that I find useful. The first is Wikipanion (yes, it’s OK to use Wikipedia). Wikipanion is a nice app front end to Wikipedia:
The second, Offline Pages, is an app that allows you to download full web pages or websites for off-line viewing (e.g., on a plane).
Finally, there are times when you want to send a link or snippet of text from your desktop computer to your iPad. A useful app/service is Prowl. When you sign up for and then install the Prowl app and browser extension, you can send links directly from your browser to your iPad.
Another bonus is that once you sign up for the Prowl service and install an app on your desktop computer, you can also send text snippets from anywhere on your computer (e.g., a telephone number, address, paragraph of text) to your iPad.
What I Don’t/Can’t Do
Based on the number of hits the iPad posts have received from the search term “SPSS and iPad,” there seems to be a bit of demand… are you listening, IBM?
To be honest, I don’t know if I want to be analyzing data on the iPad anyway. However, most data analysis is pointing and clicking, so who knows; maybe some creative developer will create a data analysis application perfectly suited to a touch-only interface.
I do a fair amount of programming and it would just be unbearable to do that on an iPad.