All posts by Richard Pak

Associate Professor at Clemson University/Department of Psychology

Tesla is wrong to use the “Autopilot” term

Self-driving cars are a hot topic!  See this Wikipedia page on Autonomous cars for a short primer.  This post is mainly an exploration of how the technology is presented to the user.

Tesla markets its self-driving technology under the name “Autopilot”.  The German government is apparently unhappy with that term because it could be misleading (LA Times):

Germany’s transport minister told Tesla to cease using the Autopilot name to market its cars in that country, under the theory that the name suggests the cars can drive themselves without driver attention, the news agency Reuters reported Sunday.

Tesla wants to be perceived as first to market with a fully autonomous car (hence the term Autopilot), yet it stresses that the system is only a driver assistance feature and that the driver is meant to stay vigilant.  But I do not think the term Autopilot is perceived that way by most lay people.  It encourages an unrealistic expectation and may lead to uncritical usage and acceptance of the technology, or complacency.

Complacency can manifest itself as:

  • too much trust in the automation (more than warranted)
  • allocation of attention to other things and not monitoring the proper functioning of automation
  • over-reliance on the automation (letting it carry out too much of the task)
  • reduced awareness of one’s surroundings (situation awareness)

Complacency is especially dangerous when unexpected situations occur and the driver must resume manual control.  The non-profit Consumer Reports says:

“By marketing their feature as ‘Autopilot,’ Tesla gives consumers a false sense of security,” says Laura MacCleery, vice president of consumer policy and mobilization for Consumer Reports. “In the long run, advanced active safety technologies in vehicles could make our roads safer. But today, we’re deeply concerned that consumers are being sold a pile of promises about unproven technology. ‘Autopilot’ can’t actually drive the car, yet it allows consumers to have their hands off the steering wheel for minutes at a time. Tesla should disable automatic steering in its cars until it updates the program to verify that the driver’s hands are on the wheel.”

Companies must commit immediately to name automated features with descriptive—not exaggerated—titles, MacCleery adds, noting that automakers should roll out new features only when they’re certain they are safe.

Tesla responded:

“We have great faith in our German customers and are not aware of any who have misunderstood the meaning, but would be happy to conduct a survey to assess this.”

But Tesla is doing a disservice by marketing its system as Autopilot and by selectively releasing video of the system performing flawlessly.

Using terms such as Autopilot, and releasing videos of only near-perfect performances of the technology, will only hasten driver complacency.

But no matter how they are marketed, these systems are just machines that rely on high-quality sensor input (radar, cameras, etc.).  Sensors can fail, GPS data can be stale, and situations can change quickly and dramatically (particularly on the road).  The system WILL make a mistake, and on the road the cost of that single mistake can be deadly.

Parasuraman and colleagues have extensively researched how humans behave when exposed to highly reliable automation, in the context of flight automation/autopilot systems.  In a classic study, they first induced a sense of complacency by exposing participants to highly reliable automation.  Later, when the automation failed, the more complacent participants were much worse at detecting the failure (Parasuraman, Molloy, & Singh, 1993).

Interestingly, when researchers examined highly autonomous autopilot systems in aircraft, they found that pilots were often confused by or distrustful of the automation’s decisions (e.g., course corrections initiated without any pilot input), suggesting LOW complacency.  But it is important to note that pilots are highly trained, and have probably not been subjected to the same effusively positive marketing about the benefits of self-driving technology that the public is receiving.  Tesla, in essence, tells drivers to “trust us,” further increasing the likelihood of driver complacency:

We are excited to announce that, as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. Eight surround cameras provide 360 degree visibility around the car at up to 250 meters of range. Twelve updated ultrasonic sensors complement this vision, allowing for detection of both hard and soft objects at nearly twice the distance of the prior system. A forward-facing radar with enhanced processing provides additional data about the world on a redundant wavelength, capable of seeing through heavy rain, fog, dust and even the car ahead.

To make sense of all of this data, a new onboard computer with more than 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software. Together, this system provides a view of the world that a driver alone cannot access, seeing in every direction simultaneously and on wavelengths that go far beyond the human senses.

References

Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced “complacency.” International Journal of Aviation Psychology, 3(1), 1–23.

Some other key readings on complacency:

Parasuraman, R. (2000). Designing automation for human use: empirical studies and quantitative models. Ergonomics, 43(7), 931–951. http://doi.org/10.1080/001401300409125

Parasuraman, R., & Wickens, C. D. (2008). Humans: Still vital after all these years of automation. Human Factors, 50(3), 511–520. http://doi.org/10.1518/001872008X312198

Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. http://doi.org/10.1177/0018720810376055


Human Factors Potpourri

Some recent items in the news with a human factors angle:

  • What happened to Google Maps?  An interesting comparison of Google Maps in 2010 and 2016 by designer/cartographer Justin O’Beirne.
  • India will use 3D paintings to slow down drivers.  Excellent use of optical illusions for road safety.
  • Death by GPS.  GPS mis-routing is the easiest and most relatable example of human-automation interaction.  Unfortunately, to its detriment, this article does not discuss the automation literature, instead focusing on more basic processes that, I think, are less relevant.

So you want to go to graduate school in Human Factors?

This is the first post in an upcoming series about human factors graduate school.

If you have decided that you might want to further your education in human factors and ergonomics by going to graduate school, here is some useful information that Anne and I have collected over the years.  While there are many sources of similar information, this one is tailored to potential HF students and answers questions that we’ve received.

First, graduate school will be very different from undergraduate.  Yes, you take classes, but the most important experience is conducting research; that is how you will be evaluated, and ultimately what determines whether you are successful.

Most prospective HF students are drawn to the field by an interest in design or usability.  It is important to realize that graduate school will not be like working in a design studio.  Instead, it will be more like an experimental psychology program, where you take courses in statistics, research methods, cognition, perception, and so on.

You will also take specialized courses in usability and other evaluation methods, but they will be just a few among many.  The goal is to ground you in the fundamentals of human capabilities and limitations so that you can apply this knowledge to the design or evaluation of artifacts (for those going into applied fields).

In the rest of this series, we’ll discuss researching programs, contacting faculty, and various dos and don’ts.

Prominent figures warn of dangerous Artificial Intelligence (it’s probably a bad HF idea too)

Recently, some very prominent scientists and other figures have warned of the consequences of autonomous weapons, or more generally artificial intelligence run amok.

Artificial intelligence is, at its core, a computational and engineering problem: designing a machine (i.e., robot) or software that can emulate thinking to a high degree.  But eventually, any AI must interact with a human, either by taking control of a situation from a human (e.g., flying a plane) or by suggesting courses of action to a human.

I thought this recent news item about potentially dangerous AI might be a great segue to another discussion of human-automation interaction.  Specifically, to a detail that does not frequently get discussed in splashy news articles or by non-human-factors people:  degree of automation. This blog post is heavily informed by a proceedings paper by Wickens, Li, Santamaria, Sebok, and Sarter (2010).

First, to HF researchers, automation is a generic term that encompasses anything that carries out a task once done by a human: robotic assembly, medical diagnostic aids, digital camera scene modes, and even hypothetical autonomous weapons with AI.  These disparate examples simply differ in their degree of automation.

Let’s back up a bit: automation can be characterized along two independent dimensions:

  • STAGE or TYPE:  What is it doing, and how is it doing it?
  • LEVEL:  How much is it doing?

Stage/Type describes WHAT tasks are being automated, and sometimes how.  Is the task perceptual, like enhancing vision at night or amplifying certain sounds?  Or is the automation carrying out a task that is more cognitive, like generating the three fastest routes to your destination?

The second dimension, Level, refers to the balance of the task shared between the automation and the human: is the automation doing a tiny bit of the task and leaving the rest to the user, or is it acting completely on its own with no input from the operator (or ability to override)?

If you imagine STAGE/TYPE (BLUE/GREEN) and LEVEL (RED) as the X and Y axes of a chart (below), it becomes clearer how various everyday examples of automation fit into the scheme.  As LEVEL and/or TYPE increases, we get a higher degree of automation, or DOA (the dotted line).

Degrees of automation represented as the dotted line (Adapted from Wickens et al., 2010)
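To make the scheme concrete, here is a minimal sketch in Python (my own illustration, not code from Wickens et al.; the stage ordering, the 1–10 level scale, and the diagonal-distance formula are all assumptions made for the example) of how the two dimensions might combine into a rough degree of automation:

```python
from dataclasses import dataclass
from enum import IntEnum

# Stages follow the information-processing sequence commonly used in
# the human-automation literature: sensing, analysis, decision, action.
class Stage(IntEnum):
    INFORMATION_ACQUISITION = 1   # e.g., night-vision enhancement
    INFORMATION_ANALYSIS = 2      # e.g., estimating travel times
    DECISION_SELECTION = 3        # e.g., recommending the best route
    ACTION_IMPLEMENTATION = 4     # e.g., steering the car itself

@dataclass
class AutomatedSystem:
    name: str
    stage: Stage
    level: int  # 1 = human does nearly everything .. 10 = full autonomy

    def degree_of_automation(self) -> float:
        # Illustrative only: a point's distance along the dotted diagonal,
        # with both axes normalized to the 0..1 range.
        return (self.stage / len(Stage) + self.level / 10) / 2

examples = [
    AutomatedSystem("digital camera scene mode", Stage.DECISION_SELECTION, 3),
    AutomatedSystem("rear-backup camera", Stage.INFORMATION_ACQUISITION, 2),
    AutomatedSystem("hypothetical autonomous weapon", Stage.ACTION_IMPLEMENTATION, 10),
]

for system in examples:
    print(f"{system.name}: DOA ~ {system.degree_of_automation():.2f}")
```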

Mainstream discussions of AI and its potential dangers seem to focus on a hypothetical ultra-high degree of automation: a weapon that will, on its own, determine threats and act.  There are actually very few examples of such a high level of automation in everyday life, because cutting the human completely “out of the loop” can have severely negative human performance consequences.

The figure below shows some examples of automation and where they fit into the scheme:

Approximate degrees of automation of everyday examples of automation

Wickens et al. (2010) use the phrase “the higher they are, the farther they fall.”  When humans interact with higher degrees of automation, they do fine as long as it works correctly, but they suffer catastrophic consequences when the automation fails (and it always will at some point).  Why?  Users become complacent with high-DOA automation, they forget how to do the task themselves, or they lose track of what was going on before the automation failed and thus cannot recover from the failure so easily.

You may have experienced a mild form of this if your car has a rear-backup camera.  Have you ever then rented a car without one?  How did you feel?  That feeling tends to get magnified with higher degrees of automation.
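To see the qualitative claim in miniature, here is a toy sketch (all coefficients are invented for illustration; this is not a model from Wickens et al. or anyone else) in which routine performance rises with DOA while failure-recovery performance falls:

```python
# Toy illustration of "the higher they are, the farther they fall."
# The coefficients are invented; they only encode the qualitative claim.

def routine_performance(doa: float) -> float:
    """Performance while the automation is working (0..1 scale)."""
    return 0.6 + 0.4 * doa  # more automation -> better routine outcomes

def failure_recovery(doa: float) -> float:
    """Performance when the automation suddenly fails (0..1 scale)."""
    # Complacency, skill decay, and lost situation awareness all grow
    # with DOA, so the post-failure drop is steeper at high DOA.
    return 0.9 - 0.7 * doa

for doa in (0.2, 0.5, 0.9):
    print(f"DOA={doa:.1f}: routine={routine_performance(doa):.2f}, "
          f"after a failure={failure_recovery(doa):.2f}")
```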

So, highly autonomous weapons (or any automation with a high DOA) are not only a philosophically bad/evil idea; they are bad for human performance!


Three months with Apple Watch

My watch face of the moment

First, a disclaimer: this isn’t a full-on review of the Watch; there are more qualified people to review the gadget.  The newest review comes from one of the best and most thorough hardware review sites: Anandtech.

One part of the review was particularly insightful:

Overall, I found that the fitness component of this watch to be a real surprise. I often hear that Apple is good at making things we didn’t know we wanted, but this is probably the first time I’ve really believed that statement. Going into the review, I didn’t really realize that I wanted a solid fitness tracker on a smartwatch, but now I’m really convinced that there is value to such features.

This has been my experience as well.  I’ve never cared to wear a fitness tracker, but I’m surprised at how much I pore over the stats of my standing, activity, and workout levels.  The watch also provides a surprisingly effective level of motivation (badges & activity circles).

My activity level (for someone who sits at a desk most of the time) has dramatically increased since I got the watch (in the activity chart, the yellow line marks when I got it).

We used to think of the smartphone as the “ubiquitous” technology, but there are times I leave mine behind.  The watch, though, is always on me, and there will be interesting use cases and challenges in the future.  I’d love to start my car with my watch!

Some other random thoughts:

  • The fitness features are great, but I wish there were a better way to view my data:
    • View splits on outdoor runs
    • View all my workouts instead of looking for them in the calendar view.
  • Many reviews I’ve read assume the watch will replace the phone.  But any extended interaction really tires the shoulders!  My interactions are limited to well under 5-10 seconds.
  • I notice that haptic feedback on the wrist is much less jarring and easier to dismiss (i.e., not as disruptive) than a vibrating phone on the body.
  • The Apple Watch is made for travel:
    • Most airlines have applets for the watch that make it so easy to keep track of gates, departures, & arrivals.
    • Boarding a plane with your watch feels very futuristic, but most pass readers are on the right side and I wear my watch on the left, resulting in very awkward wrist positions.  Even when the reader was on the left, it faced upwards, requiring me to turn my wrist downwards.
  • It is unobtrusive and looks like a watch, not like a gizmo on my wrist.
  • Apple Pay belongs on the watch.  I’ve used Apple Pay on my phone but it is much more seamless on the watch.
  • Notifications are great if you pare down what can notify you.  I only get notified of VIP mail (select senders) and text messages.
  • Controlling my thermostat and other electrical devices from my wrist is pretty great.

Apple Watch Human Factors

The big news in tech last week was the unveiling of the Apple Watch.  I think it is a nice moment to discuss a range of human factors topics.  (This topic may elicit strong feelings for or against Apple or the idea of a smartwatch, but let’s keep it about the science.)

The first is technology adoption/acceptance. Lots of people were probably scratching their heads asking, “who wears a watch, nowadays?” But you do see lots of people wearing fitness bands. Superficially, that contrast seems to demonstrate the Technology Acceptance Model (TAM) in action.  TAM is a way to try to understand when people will adopt new technology. It boils down the essential factors to usability (does it seem easy to use?) and usefulness (does it seem like it will help my work or life?).

Fitness bands check both of the above boxes: since they are essentially single-function devices they are relatively easy to use and tracking fitness is perceived as useful for many people.

Back to the Watch: it may also check both of the above boxes.  It certainly appears easy to use (though we do not know yet), and because it offers fitness tracking plus many other functions via apps, it may well be perceived as useful by the same crowd that buys fitness bands.
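As a rough sketch of how TAM’s two factors might be combined (the weights and scores below are placeholders for illustration; real TAM studies estimate them from validated survey scales):

```python
# Toy operationalization of the Technology Acceptance Model (TAM).
# Real studies fit these weights from survey data; the values here
# are placeholders for illustration.

def adoption_intention(perceived_usefulness: float,
                       perceived_ease_of_use: float,
                       w_useful: float = 0.6,
                       w_ease: float = 0.4) -> float:
    """Predicted intention to adopt a technology (all inputs 0..1)."""
    return w_useful * perceived_usefulness + w_ease * perceived_ease_of_use

# A fitness band: single-function, so easy to use and clearly useful.
print(adoption_intention(perceived_usefulness=0.8, perceived_ease_of_use=0.9))

# A just-announced smartwatch: both perceptions are still uncertain.
print(adoption_intention(perceived_usefulness=0.5, perceived_ease_of_use=0.6))
```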

The next topic that got me excited was the so-called digital crown (shown below).  Anne and I have previously studied the contrasts between touch screens and rotary knobs for a variety of computing tasks.  Having both choices allows the user to select the best input device for the task: touch for pushing big on-screen buttons and large-scale movement, and the knob for precise, linear movement without obscuring the screen.  Using a knob is certainly easier than a touch screen if you have shaky hands or are riding in a bumpy cab.


Two small items of note were also included in the Watch.  The first is the two-finger gesture on the watch face for sending a heartbeat to another user, the same gesture many people intuitively make when they want to feel their own heartbeat.

Finally, the Watch can send animated emoji to other users.  What is noteworthy is the ability to manipulate both the eyes and the mouth of the emoji characters.  I couldn’t find the literature, but I recall that there are some cross-cultural differences in how people use and interpret emoji: Western users tend to focus on the mouth while Eastern users tend to focus on the eyes (if you know what reference I’m talking about, or if I’m mis-remembering, feel free to comment).



There’s so much I haven’t brought up (haptic and multi-modal feedback, user interface design, automation, voice input and of course privacy)!


Interesting control/display

Anne sent me an example of “why haven’t they thought of this before?”: an air vent with the temperature display and control knob all in one.

In this article describing the new Audi TT and its glass dashboard, they describe this novel control/display/air vent combination.  I guess one open question is whether it is accessible only to the driver or centrally located.


The dashboard (shown in the linked article), however, is another story.  It may look futuristic, but it seems like a distraction nightmare!