Category Archives: technology

Tesla counterpoint: “40% reduction in crashes” with introduction of Autosteer

I posted yesterday about the challenges of fully autonomous cars and cars that approach autonomy. Today I bring you a story about the successes of semi-autonomous features in automobiles.

Tesla has a feature called Autopilot that assists the driver without being completely autonomous. Autopilot includes car-controlled actions such as collision warnings, automatic emergency braking, and automatic lane keeping. Tesla classifies the Autopilot features as Level 2 automation (Level 5 is considered fully autonomous). Rich has already shared our thoughts about calling this Autopilot in a previous post. One particular feature, called Autosteer, is described in the NHTSA report as:

The Tesla Autosteer system uses information from the forward-looking camera, the radar sensor, and the ultrasonic sensors, to detect lane markings and the presence of vehicles and objects to provide automated lane-centering steering control based on the lane markings and the vehicle directly in front of the Tesla, if present. The Tesla owner’s manual contains the following warnings: 1) “Autosteer is intended for use only on highways and limited-access roads with a fully attentive driver. When using Autosteer, hold the steering wheel and be mindful of road conditions and surrounding traffic. Do not use Autosteer on city streets, in construction zones, or in areas where bicyclists or pedestrians may be present. Never depend on Autosteer to determine an appropriate driving path. Always be prepared to take immediate action. Failure to follow these instructions could cause serious property damage, injury or death;” and 2) “Many unforeseen circumstances can impair the operation of Autosteer. Always keep this in mind and remember that as a result, Autosteer may not steer Model S appropriately. Always drive attentively and be prepared to take immediate action.” The system does not prevent operation on any road types.

An NHTSA report investigating a fatal Tesla crash also noted that the introduction of Autosteer corresponded to a roughly 40% reduction in crash rates. That's a lot, considering that Toyota's Dr. Gill Pratt said he might be happy with a 1% change.
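For the curious, the 40% figure is simple arithmetic on crash rates per mile driven. A minimal sketch, using the approximate before/after rates given in the report (about 1.3 and 0.8 airbag-deployment crashes per million miles; treat the exact figures as my reading of the report rather than official summary numbers):

```python
# Approximate crash rates cited in the NHTSA ODI report (airbag-deployment
# crashes per million miles, before vs. after Autosteer installation).
# Treat the exact figures as illustrative.
rate_before = 1.3
rate_after = 0.8

reduction = (rate_before - rate_after) / rate_before
print(f"Reduction: {reduction:.0%}")  # about 38%, reported as "almost 40%"
```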

Autopilot was enabled in October 2015, so there has been a good stretch of time for post-Autopilot crash data to accumulate.

Toyota Gets It: Self-driving cars depend more on people than on engineering

I recommend reading this interview with Toyota’s Dr. Gill Pratt in its entirety. He discusses point-by-point the challenges of a self-driving car that we consider in human factors but don’t hear much about in the media. For example:

  • Definitions of autonomy vary. True autonomy is far away. He gives the example of a car performing well on an interstate or in light traffic compared to driving through the center of Rome during rush hour.
  • Automation will fail. And the less it fails, the less prepared the driver is to assume control.
  • Emotionally, we cannot accept autonomous cars that kill people, even if they reduce overall crash rates and save lives in the long run.
  • It is difficult to run simulations with the autonomous cars that capture the extreme variability of the human drivers in other cars.

I’ll leave you with the last paragraph in the interview as a summary:

So to sum this thing up, I think there’s a general desire from the technical people in this field to have both the press and particularly the public better educated about what’s really going on. It’s very easy to get misunderstandings based on words like or phrases like “full autonomy.” What does full actually mean? This actually matters a lot: The idea that only the chauffeur mode of autonomy, where the car drives for you, that that’s the only way to make the car safer and to save lives, that’s just false. And it’s important to not say, “We want to save lives therefore we have to have driverless cars.” In particular, there are tremendous numbers of ways to support a human driver and to give them a kind of blunder prevention device which sits there, inactive most of the time, and every once in a while, will first warn and then, if necessary, intervene and take control. The system doesn’t need to be competent at everything all of the time. It needs to only handle the worst cases.

Thoughtful and Fun Interfaces in the Reykjavik City Museum

I stopped over in Iceland on the way to a conference and popped in to the Reykjavik City Museum, not knowing what I’d find. I love the idea of technology in a museum, but I’m usually disappointed. Either the concepts are bad, the technology is silly (press a button, light some text), or it just doesn’t work, beaten into submission by armies of 4-year-olds.

Not at the Settlement Exhibit in Reykjavik. There are two unique interfaces I want to cover, but I’ll start at the beginning with a more typical touchscreen that controlled a larger wall display. As you enter the museum, there are multiple stations for reading pages of the Sagas. These are the stories of Iceland’s history, from the 9th to 11th centuries, beautifully illustrated. They have been scanned, so you can browse the pages (with translations) without damaging them. I didn’t have all day to spend there, but after starting some of the Sagas, I wished I had.

Further in you see the reason for the location: the excavation of the oldest known structure in Iceland, a longhouse, is in the museum! Around it are typical displays with text and audio, explaining the structure and what life was like at that time.

Then I moved into a smaller dark room with an attractive lit podium (see video below). You could touch it, and it controlled the large display on the wall, which showed the longhouse as a 3-D virtual reconstruction. As you moved your finger around the circles on the podium, the camera rotated so you could get a good look at all parts of the longhouse. As you moved between circles, a short audio clip would play to introduce the next section. Each circle controlled the longhouse display, but the closer to the center you moved, the more of the structure’s interior you could see. Fortunately, someone else made a better video of the interaction than I did:

The last display was simple, but took planning and thought. Near the exit was a large table display of the longhouse. It was also a touch interface, where you could put your hand on the table to activate information about how parts of the house were used. Think of the challenges: when I was there, it was surrounded by 10 people, all touching it at once, all looking for information in different languages. It had to be low enough for everyone to see, but not so low that it was hard to touch. Overall, they did a great job.

Be sure to do a stopover if you cross the Atlantic!

Both videos come from Alex Martire on YouTube.

Tesla is wrong to use “autopilot” term

Self-driving cars are a hot topic!  See this Wikipedia page on Autonomous cars for a short primer.  This post explores how the technology is presented to the user.

Tesla markets its self-driving technology under the name “Autopilot”.  The German government is apparently unhappy with the use of that term because it could be misleading (LA Times):

Germany’s transport minister told Tesla to cease using the Autopilot name to market its cars in that country, under the theory that the name suggests the cars can drive themselves without driver attention, the news agency Reuters reported Sunday.

Tesla wants to be perceived as first to market with a fully autonomous car (hence the term Autopilot), yet it stresses that Autopilot is only a driver-assistance system and that the driver is meant to stay vigilant.  But I do not think the term Autopilot is perceived that way by most laypeople.  It encourages an unrealistic expectation and may lead to uncritical usage and acceptance of the technology, or complacency.

Complacency can manifest as:

  • too much trust in the automation (more than warranted)
  • allocation of attention to other things and not monitoring the proper functioning of automation
  • over-reliance on the automation (letting it carry out too much of the task)
  • reduced awareness of one’s surroundings (situation awareness)

Complacency is especially dangerous when unexpected situations occur and the driver must resume manual control.  The non-profit Consumer Reports says:

“By marketing their feature as ‘Autopilot,’ Tesla gives consumers a false sense of security,” says Laura MacCleery, vice president of consumer policy and mobilization for Consumer Reports. “In the long run, advanced active safety technologies in vehicles could make our roads safer. But today, we’re deeply concerned that consumers are being sold a pile of promises about unproven technology. ‘Autopilot’ can’t actually drive the car, yet it allows consumers to have their hands off the steering wheel for minutes at a time. Tesla should disable automatic steering in its cars until it updates the program to verify that the driver’s hands are on the wheel.”

Companies must commit immediately to name automated features with descriptive—not exaggerated—titles, MacCleery adds, noting that automakers should roll out new features only when they’re certain they are safe.

Tesla responded that:

“We have great faith in our German customers and are not aware of any who have misunderstood the meaning, but would be happy to conduct a survey to assess this.”

But Tesla is doing a disservice by marketing their system using the term AutoPilot and by selectively releasing video of the system performing flawlessly:

Using terms such as Autopilot and releasing videos of near-perfect performances of the technology will only increase the likelihood of driver complacency.

But no matter how they are marketed, these systems are just machines that rely on high-quality sensor input (radar, cameras, etc.).  Sensors can fail, GPS data can be stale, and situations can change quickly and dramatically (particularly on the road).  The system WILL make a mistake, and on the road, the cost of that single mistake can be deadly.

Parasuraman and colleagues have heavily researched how humans behave when exposed to highly reliable automation in the context of flight automation/autopilot systems.  In a classic study, they first induced a sense of complacency by exposing participants to highly reliable automation.  Later,  when the automation failed, the more complacent participants were much worse at detecting the failure (Parasuraman, Molloy, & Singh, 1993).
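The intuition behind this finding can be made concrete: when automation almost never fails, a habituated monitor samples it less often, so the rare failure is exactly the one that goes unseen. Below is a toy Monte Carlo sketch of that mechanism; it is purely illustrative, not the actual task from the 1993 study, and the assumption that check frequency tracks observed failure rate is mine:

```python
import random

def detection_rate(reliability, n_steps=20_000, seed=42):
    """Toy model of complacency: the operator checks the automation
    with a probability tied to how often it is seen to fail, so highly
    reliable automation gets monitored only rarely.  A failure counts
    as detected only if the operator happened to be checking on that
    step.  Illustrative only; not the actual task from Parasuraman,
    Molloy, & Singh (1993)."""
    rng = random.Random(seed)
    check_prob = 1.0 - reliability            # complacent operators check less
    detected = failures = 0
    for _ in range(n_steps):
        failed = rng.random() > reliability   # automation fails this step
        checking = rng.random() < check_prob  # operator is looking
        if failed:
            failures += 1
            detected += checking
    return detected / failures if failures else 1.0

# The more reliable the automation, the smaller the share of its
# failures the (complacent) operator actually catches:
print(detection_rate(0.99))   # near 0: rare failures go unseen
print(detection_rate(0.70))   # around 0.3: frequent checks catch more
```

The point of the sketch is the ordering, not the exact numbers: raising reliability lowers the fraction of failures caught, which is the complacency effect in miniature.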

Interestingly, when researchers examined highly autonomous autopilot systems in aircraft, they found that pilots were often confused by or distrustful of the automation’s decisions (e.g., course corrections initiated without any pilot input), suggesting LOW complacency.  But it is important to note that pilots are highly trained and have probably not been exposed to the same degree of effusively positive marketing about the benefits of self-driving technology that the public is receiving.  Tesla, in essence, tells drivers to “trust us,” further increasing the likelihood of driver complacency:

We are excited to announce that, as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. Eight surround cameras provide 360 degree visibility around the car at up to 250 meters of range. Twelve updated ultrasonic sensors complement this vision, allowing for detection of both hard and soft objects at nearly twice the distance of the prior system. A forward-facing radar with enhanced processing provides additional data about the world on a redundant wavelength, capable of seeing through heavy rain, fog, dust and even the car ahead.

To make sense of all of this data, a new onboard computer with more than 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software. Together, this system provides a view of the world that a driver alone cannot access, seeing in every direction simultaneously and on wavelengths that go far beyond the human senses.


Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced “complacency.” International Journal of Aviation Psychology, 3(1), 1–23.

Some other key readings on complacency:

Parasuraman, R. (2000). Designing automation for human use: empirical studies and quantitative models. Ergonomics, 43(7), 931–951.

Parasuraman, R., & Wickens, C. D. (2008). Humans: Still vital after all these years of automation. Human Factors, 50(3), 511–520.

Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410.


Three months with Apple Watch

My watch face of the moment

First, a disclaimer: this isn’t a full-on review of the Watch; there are more qualified people to review the gadget.  The newest review comes from one of the best and most thorough hardware review sites: Anandtech.

One part of the review was particularly insightful:

Overall, I found that the fitness component of this watch to be a real surprise. I often hear that Apple is good at making things we didn’t know we wanted, but this is probably the first time I’ve really believed that statement. Going into the review, I didn’t really realize that I wanted a solid fitness tracker on a smartwatch, but now I’m really convinced that there is value to such features.

This has been my experience as well.  I’ve never cared to wear a fitness tracker, but I’m surprised at how much I pore over the stats of my standing, activity, and workout levels.  The watch also provides a surprisingly effective level of motivation (badges & activity circles).

My activity level (for someone who sits at a desk most of the time) has dramatically increased since I got the watch (see right; the yellow line marks when I got it).

We used to think that smartphones were the “ubiquitous” technology, but there are times I leave mine behind.  The watch is always on, and there will be interesting use cases and challenges in the future.  I’d love to start my car with my watch!

Some other random thoughts:

  • The fitness features are great, but I wish there were a better way to view my data:
    • View splits on outdoor runs
    • View all my workouts instead of looking for them in the calendar view.
  • Many reviews I’ve read assume the watch will replace the phone.  But any extended activity really tires the shoulders!  My interactions are really limited to 5-10 seconds at most.
  • I notice that haptic feedback on the wrist is much less jarring and easier to dismiss (i.e., not as disruptive) than a vibrating phone on the body.
  • The Apple Watch is made for travel:
    • Most airlines have applets for the watch that make it so easy to keep track of gates, departures, & arrivals.
    • Boarding a plane with your watch feels very futuristic, but most pass readers are on the right side and I wear my watch on the left, resulting in very awkward wrist positions.  Even when the reader was on the left, it faced upwards, requiring me to turn my wrist downwards.
  • It is unobtrusive and looks like a watch, not like a gizmo on my wrist.
  • Apple Pay belongs on the watch.  I’ve used Apple Pay on my phone but it is much more seamless on the watch.
  • Notifications are great if you pare down what can notify you.  I only get notified of VIP mail (select senders) and text messages.
  • Controlling my thermostat, and other electrical devices from my wrist is pretty great.

Apple Watch Human Factors

The big news in tech last week was the unveiling of the Apple Watch. I think it is a nice moment to discuss a range of human factors topics. (This topic may elicit strong feelings for or against Apple or the idea of a smartwatch, but let’s keep it about the science.)

The first is technology adoption/acceptance. Lots of people were probably scratching their heads asking, “who wears a watch, nowadays?” But you do see lots of people wearing fitness bands. Superficially, that contrast seems to demonstrate the Technology Acceptance Model (TAM) in action.  TAM is a way to try to understand when people will adopt new technology. It boils down the essential factors to usability (does it seem easy to use?) and usefulness (does it seem like it will help my work or life?).

Fitness bands check both of the above boxes: since they are essentially single-function devices they are relatively easy to use and tracking fitness is perceived as useful for many people.

Back to the Watch, it may also check off both of the above boxes: it certainly appears easy to use (but we do not know yet), and because it has fitness tracking functions plus many others via apps it certainly may be perceived as useful to the same crowd that buys fitness bands.
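The two-factor logic above can be sketched as a back-of-the-envelope calculation. Real TAM studies use validated survey scales and estimate the weights by regression; the function, scales, and 0.6/0.4 weights below are entirely made up for illustration:

```python
def tam_sketch(perceived_usefulness, perceived_ease_of_use,
               w_useful=0.6, w_ease=0.4):
    """Toy sketch of the Technology Acceptance Model's core claim:
    intention to adopt rises with perceived usefulness and perceived
    ease of use (rated 1-7 here).  The weights are made up for
    illustration; real TAM studies estimate them from survey data."""
    return w_useful * perceived_usefulness + w_ease * perceived_ease_of_use

# A single-function fitness band: easy to use, clearly useful to many
fitness_band = tam_sketch(perceived_usefulness=6, perceived_ease_of_use=6)

# A smartwatch whose usefulness an undecided buyer hasn't yet seen
smartwatch = tam_sketch(perceived_usefulness=3, perceived_ease_of_use=5)

print(fitness_band > smartwatch)  # the band wins on this toy scoring
```

The sketch only encodes the model's qualitative claim: a device that scores low on either perceived factor loses adoption intention, no matter how capable it actually is.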

The next topic that got me excited was the discussion of the so-called digital crown (shown below). Anne and I have previously studied the contrasts between touch screens and rotary knobs for a variety of computing tasks. Having both choices allows the user to select the best input device for the task: touch for pushing big on-screen buttons and large-scale movement, and the knob for precise, linear movement without obscuring the screen. Using a knob is certainly easier than a touch screen if you have shaky hands or are riding in a bumpy cab.


Two small items included in the Watch were of note. The first was the use of the two-finger gesture on the watch face to send a heartbeat to another user, the same gesture many people intuitively think of when they want to feel their own heartbeat.

Finally, the Watch has the ability to send animated emoji to other users. What was noteworthy is the ability to manipulate both the eyes and mouth of the emoji characters. I couldn’t find any literature, but I recall reading that there are some cross-cultural differences in how people use and interpret emoji: Western users tend to focus on the mouth while Eastern users tend to focus on the eyes (if you know what reference I’m talking about, or if I’m mis-remembering, feel free to comment).



There’s so much I haven’t brought up (haptic and multi-modal feedback, user interface design, automation, voice input and of course privacy)!



Wearable Fitness Trackers: A Comparative Usability Evaluation

This guest post is from graduate students Haley Vaigneur and Bliss Altenhoff. Haley and Bliss compared the usability of two fitness trackers as part of a graduate course in health informatics taught by Kelly Caine.


Wearable fitness trackers allow users to track and monitor their health. While these devices originated as a way for doctors to monitor chronically ill patients’ vitals, they have recently been developed and marketed to a more general, health-conscious market. Equipped with advanced sensors such as accelerometers, these devices automatically track users’ activity and sleep, which can then be compared with logged fitness goals and daily diet. Users can then use their statistics to help create or maintain a healthier lifestyle. Two examples of such devices are the Jawbone Up and the Fitbit Flex, shown above.

Wearable technology is popular and has the potential to dramatically impact health (e.g., long-term health and activity data tracking, immediate syncing with Electronic Health Records (EHRs)). But these benefits can only be realized if the user is able to effectively use and understand these devices. This was the motivation for focusing on two of the most popular models of fitness trackers, the Jawbone Up and the Fitbit Flex, and their accompanying smartphone apps.

This study examined the usability of these two devices and their accompanying smartphone apps by having 14 participants (7 for Jawbone Up, 7 for FitBit Flex) perform a think-aloud test on five key features: Setup, Setting Goals, Tracking Diet, Tracking Activity, and Setting an Alarm. Participants then kept the wearable for three days and were encouraged to incorporate it into their normal routine. On the third day, participants completed the System Usability Scale survey and an informal interview regarding their experiences using the wearable.
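For reference, the System Usability Scale used above has a fixed scoring rule that is easy to get wrong: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5 to land on a 0-100 scale. A minimal sketch of that standard scoring:

```python
def sus_score(responses):
    """Score a 10-item System Usability Scale questionnaire.

    `responses` are ten Likert ratings from 1 (strongly disagree)
    to 5 (strongly agree).  Odd items (1st, 3rd, ...) are positively
    worded and contribute r - 1; even items are negatively worded
    and contribute 5 - r.  The summed contributions are multiplied
    by 2.5 to give a 0-100 result.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten ratings between 1 and 5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible: 100.0
print(sus_score([3] * 10))                        # all-neutral: 50.0
```

Note that a raw average of the ten ratings would be meaningless because half the items are reverse-scored; the alternation is the whole trick of the instrument.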

Some of the key Jawbone Up findings were:

  1. Adding food or drink items was somewhat difficult due to unintuitive organization and unpredictable bugs. For example, one participant attempted to add a food item by scanning the bar code of a Lunchable, but the app added a Dr. Pepper to the log.
  2. Participants struggled to find the alarm settings, with one conducting a general web search for help to understand the Smart Sleep Window settings and how to save alarm settings.
  3. None of the participants were able to figure out how to communicate to the band or app that they would like to begin a workout. They didn’t realize that the Stopwatch menu option was intended to time the workout.

Some of the key Fitbit Flex findings were:

  1. Participants felt that the wristband (when using the appropriately sized band) was neither uncomfortable nor revealing, and they were proud to wear it because it made them feel healthy.
  2. Users had a difficult time figuring out where to go on the app to set their health goals at first. Their instinct was to find it on the app homepage, or Dashboard, but it was under the Account tab.
  3. Some users had difficulty putting on the wristband, and several noted that it fell off unexpectedly. Users were also confused about where to “tap” the wristband to activate it, based on the instructions given in the app. The picture can appear to instruct the user to tap below the black screen, when the user actually needs to tap the screen directly, and firmly.
  4. Users did not realize that after turning Bluetooth on their phone, they needed to return to the app to tell the phone and wristband to begin syncing. They also noted that leaving Bluetooth on all day drained their phone battery.


Based on time per task and number of errors, the Fitbit Flex performed better than the Jawbone Up on the five tasks. Users’ ultimate trust in the data, willingness to continue using the wearable, and general satisfaction with each wearable were heavily influenced by their initial experiences (first day). The positive initial think-aloud results for the Fitbit Flex were also consistent with a more positive later experience and stronger acceptance of the wearable.

This study found that there is still much room for improvement in the usability of the accompanying smartphone apps. A major concern for these kinds of devices is keeping user interest and motivation, which can easily be lost through confusing or cumbersome designs. By striving to improve the human factors of the apps alongside the capabilities of the wearables themselves, there is great potential for greater user satisfaction, and thus more long-term use.

While activity-tracking wearables are currently most popular with tech-savvy, active people, these devices should be designed for users of all ages and experience levels. These devices could change health monitoring drastically and give people the power and ability to make better choices and live healthier lifestyles.

Haley Vaigneur is a graduate student in Industrial Engineering at Clemson University. Her concentration is Human Factors and Ergonomics, with an emphasis on research in the healthcare field.

Bliss Altenhoff is a Doctoral Candidate studying Human Factors Psychology at Clemson University, where she received her M.S. in Applied Psychology in 2012.  She is a member of the Perception and Action (PAC) lab, where her research is concentrated on enhancing human perception and performance by enriching perceptual display technologies for laparoscopic surgeons.

This material is based upon work supported by the National Science Foundation under Grant No. 1314342. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Interesting control/display

Anne sent me an example of “why haven’t they thought of this before”: an air vent with the temperature display and control knob all in one.

In this article describing the new Audi TT with a glass dashboard, they describe the novel control/display/air vent seen in the image above. One open question is whether it is accessible only to the driver or centrally located.


The dashboard (shown in the linked article), however, is another story. While it looks futuristic, it seems like a distraction nightmare!

The Future Newsroom (bad touchscreen lag)


This clip of Fox News’ new studio has been tearing up the internet. But what caught my eye was the touchscreen lag and the general unresponsiveness and accidental touches by the users in the background (see image at top; video here). Starting at the 10-second mark, note the user on the right.