All posts by Richard Pak

Associate Professor, Department of Psychology, Clemson University

Dr. Mica Endsley: Current Challenges and Future Opportunities In Human-Autonomy Research

We had a chance to interview Dr. Mica Endsley about her thoughts on autonomy.

The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. In our second post in a new series, we interview one of the leaders in the study of the human factors of autonomy, Dr. Mica Endsley.

Down on the farm: Human factors psychologist Margaux Ascherl optimizes technology to make farming more efficient

Complementing the previous post about applied psychology, this new article dives into how one human factors PhD, Margaux Ascherl, is working to make farming more efficient with technology (she also happens to be my former student!):

The world’s population of 7.3 billion is predicted to grow to 9.7 billion by 2050, according to the Global Harvest Initiative. To feed all those people, global agricultural productivity must increase by 1.75 percent annually.

One person working to drive this increase is Margaux Ascherl, PhD, user experience leader at John Deere Intelligent Solutions Group in Urbandale, Iowa. John Deere recruited Ascherl in late 2012 while she was finishing her PhD in human factors psychology at Clemson University. Five years later, she now leads a team responsible for the design and testing of precision agriculture technology used in John Deere equipment.

Ascherl spoke to the Monitor about what it’s like to apply psychology in an agricultural context and how her team is helping farmers embrace new technology to feed the world.
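A quick aside on the arithmetic in that opening paragraph: the quoted figures imply a fairly modest population growth rate, which the 1.75 percent productivity target comfortably outpaces. Here is a minimal sketch of the compounding (the roughly 35-year span and the interpretation are my assumptions, not the Global Harvest Initiative’s):

```python
# Back-of-the-envelope compounding behind the quoted figures.
# Assumption: roughly 35 years between the 7.3-billion estimate and 2050.
pop_now, pop_2050, years = 7.3e9, 9.7e9, 35

# Implied average annual population growth: (end / start)**(1/years) - 1
pop_rate = (pop_2050 / pop_now) ** (1 / years) - 1
print(f"Implied population growth: {pop_rate:.2%} per year")      # ~0.82%

# What a 1.75% annual productivity increase compounds to over that span
productivity_gain = 1.0175 ** years
print(f"Cumulative productivity gain: {productivity_gain:.2f}x")  # ~1.84x
```

In other words, the target implies productivity nearly doubling while population grows by about a third; presumably the extra headroom is meant to cover rising per-capita demand.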

Human-Robot/AI Relationships: Interview with Dr. Julie Carpenter

Over at Human-Autonomy Sciences, we had a chance to interview Dr. Julie Carpenter about her research on human-robot/AI relationships.

As the first post in a series, we interview one of the pioneers in the study of human-AI relationships, Dr. Julie Carpenter. She has over 15 years of experience in human-centered design and human-AI interaction research, teaching, and writing. Her principal research is about how culture influences human perception of AI and robotic systems, and the associated human factors such as user trust and decision-making in human-robot cooperative interactions in natural use-case environments.

Throwback Thursday: A model for types and levels of automation

This week’s Throwback Thursday post (next door, at Human-Autonomy Sciences) covers another seminal paper in the study of autonomy:

This is the second post in our “throwback” series. In it, I will take you through an article written by some of the best in the human factors and ergonomics field: the late Raja Parasuraman, Tom Sheridan, and Chris Wickens. Though several authors have introduced the concept of automation being implemented at various levels, for me this article nailed it.

Throwback Thursday: The Ironies of Automation

My third job (in addition to being a professor and curating this blog) is working on another blog with Arathi Sethumadhavan, focused on the social science of autonomy and automation.  You can find us at Human-Autonomy Sciences.

Occasionally, I will cross-post items that might be of interest to both readerships.  Over there, we’re starting a new series of posts called Throwback Thursdays, where we go back in time to review some seminal papers in the history of human-automation interaction (HAI), written for a lay audience.

The first post covers Bainbridge’s 1983 paper, the “Ironies of Automation”:

Don’t worry, our Throwback Thursday doesn’t involve embarrassing pictures of me or Arathi from 5 years ago.  Instead, it is more cerebral.  The social science behind automation and autonomy has a long, rich history, and despite being one of the earliest topics of study in engineering psychology, it has even more relevance today.

In this aptly titled paper, Bainbridge discusses, back in 1983(!), the ironic things that can happen when humans interact with automation.  The words of this paper ring especially true today, when the design strategy of some companies is to consider the human as an error term to be eliminated.


“Applied psychology is hot, and it’s only getting hotter”…and one more thing

The American Psychological Association’s member magazine, the Monitor, recently highlighted 10 trends to watch in 2018.  One of those trends: applied psychology is hot!

In this special APA Monitor report, “10 Trends to Watch in Psychology,” we explore how several far-reaching developments in psychology are transforming the field and society at large.

Our own Anne McLaughlin, along with other prominent academics and applied psychologists in industry, was quoted in the article:

As technology changes the way we work, play, travel and think, applied psychologists who understand technology are more sought after than ever, says Anne McLaughlin, PhD, a professor of human factors and applied cognition in the department of psychology at North Carolina State University and past president of APA’s Div. 21 (Applied Experimental and Engineering Psychology).

Also quoted was Arathi Sethumadhavan:

Human factors psychologist Arathi Sethumadhavan, PhD, has found almost limitless opportunities in the health-care field since finishing her graduate degree in 2009. Though her background was in aviation, she found her human factors skills transferred easily to the medical sector—and those skills have been in demand.

One more thing…

Arathi and I have recently started a new blog, Human-Autonomy Sciences, devoted to the psychology of human-autonomy interaction.  We hope you visit it and contribute to the discussion!

Designing the technology of ‘Blade Runner 2049’

The original Blade Runner is my favorite movie and can be credited with sparking my interest in human-technology/human-autonomy interactions.  The sequel is fantastic if you have not seen it (I’ve seen it twice already, and a third viewing is imminent).

If you’ve seen the original or the sequel, the representations of incidental technologies may have seemed unusual.  For example, the technologies feel like a strange hybrid of digital/analog systems, they are mostly voice controlled, and the hardware and software have a well-worn look.  Machines also make satisfying noises as they work (also present in the sequel).  This is a refreshing contrast to the super-clean, touch-based, transparent augmented-reality displays shown in other movies.

This really great article from Engadget [WARNING: CONTAINS SPOILERS] profiles Territory, the company that designed the technology shown in the movie Blade Runner 2049.  I’ve always been fascinated by the futuristic UI concepts shown in movies.  What is the interaction like?  What is the information density?  Is it multi-modal?  Why does it work like that, and does it fit in-world?

The article suggests that the team thought deeply about how to portray technology and UI by starting from the fundamentals (I would love to have this job):

Blade Runner 2049 was challenging because it required Territory to think about complete systems. They were envisioning not only screens, but the machines and parts that would make them work.

With this in mind, the team considered a range of alternate display technologies. These included e-ink screens, which use tiny microcapsules filled with positively and negatively charged particles, and microfiche sheets, an old analog format used by libraries and other archival institutions to preserve old paper documents.


Outside Magazine profiles Anne’s rock climbing & human factors research

Anne’s research on attention and rock climbing was recently featured in an article in Outside Magazine:

To trad climb is to be faced with hundreds of such split-second micro decisions, the consequences of which can be fatal. That emphasis on human judgment and its fallibility intrigued Anne McLaughlin, a psychology professor at North Carolina State University. An attention and behavior researcher, she set out to model how and why rock climbers make decisions, and she’d recruited Weil and 31 other trad climbers to contribute data to the project.

The idea for the study first came about at the crag. In 2011, McLaughlin, Chris Wickens, a psychology professor at Colorado State University, and John Keller, an engineer at Alion Science and Technology, converged in Las Vegas for the Human Factors and Ergonomics Society conference, an annual event that brings together various professionals practicing user-focused product design. With Red Rocks just a few minutes away, the three avid climbers were eager to get some time on the rock before the day’s sessions, says Keller, even if it meant starting at 3 a.m.

Tesla is wrong to use the term “Autopilot”

Self-driving cars are a hot topic!   See the Wikipedia page on autonomous cars for a short primer.  This post is mainly an exploration of how the technology is presented to the user.

Tesla markets its self-driving technology using the term “Autopilot”.  The German government is apparently unhappy with the use of that term because it could be misleading (LA Times):

Germany’s transport minister told Tesla to cease using the Autopilot name to market its cars in that country, under the theory that the name suggests the cars can drive themselves without driver attention, the news agency Reuters reported Sunday.

Tesla wants to be perceived as first to market with a fully autonomous car (hence the term Autopilot), yet it stresses that Autopilot is only a driver-assistance system and that the driver is meant to stay vigilant.  But I do not think the term Autopilot is perceived that way by most lay people.  It encourages unrealistic expectations and may lead to uncritical usage and acceptance of the technology, or complacency.

Complacency can manifest as:

  • too much trust in the automation (more than warranted)
  • allocation of attention to other things and not monitoring the proper functioning of automation
  • over-reliance on the automation (letting it carry out too much of the task)
  • reduced awareness of one’s surroundings (situation awareness)

Complacency is especially dangerous when unexpected situations occur and the driver must resume manual control.  The non-profit Consumer Reports says:

“By marketing their feature as ‘Autopilot,’ Tesla gives consumers a false sense of security,” says Laura MacCleery, vice president of consumer policy and mobilization for Consumer Reports. “In the long run, advanced active safety technologies in vehicles could make our roads safer. But today, we’re deeply concerned that consumers are being sold a pile of promises about unproven technology. ‘Autopilot’ can’t actually drive the car, yet it allows consumers to have their hands off the steering wheel for minutes at a time. Tesla should disable automatic steering in its cars until it updates the program to verify that the driver’s hands are on the wheel.”

Companies must commit immediately to name automated features with descriptive—not exaggerated—titles, MacCleery adds, noting that automakers should roll out new features only when they’re certain they are safe.

Tesla responded:

“We have great faith in our German customers and are not aware of any who have misunderstood the meaning, but would be happy to conduct a survey to assess this.”

But Tesla is doing a disservice by marketing its system using the term Autopilot and by selectively releasing video of the system performing flawlessly.

Using terms such as Autopilot, or releasing videos of near-perfect instances of the technology, will only increase the likelihood of driver complacency.

But no matter how they are marketed, these systems are just machines that rely on high-quality sensor input (radar, cameras, etc.).  Sensors can fail, GPS data can be stale, and situations can change quickly and dramatically (particularly on the road).  The system WILL make a mistake, and on the road, the cost of that single mistake can be deadly.

Parasuraman and colleagues have extensively researched how humans behave when exposed to highly reliable automation, particularly in the context of flight automation and autopilot systems.  In a classic study, they first induced a sense of complacency by exposing participants to highly reliable automation.  Later, when the automation failed, the more complacent participants were much worse at detecting the failure (Parasuraman, Molloy, & Singh, 1993).
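To make that mechanism concrete, here is a toy Monte Carlo sketch of the monitoring-detection relationship (all parameters are invented for illustration; this is a sketch of the general idea, not the actual 1993 task):

```python
# Toy simulation: an operator who rarely monitors reliable automation
# catches fewer of its rare failures. All parameters are hypothetical.
import random

random.seed(42)

def detection_rate(p_monitoring: float, trials: int = 100_000,
                   p_failure: float = 0.05) -> float:
    """Fraction of automation failures caught, given the probability
    that the operator is actually watching on any given trial."""
    failures = detected = 0
    for _ in range(trials):
        if random.random() < p_failure:         # automation fails here
            failures += 1
            if random.random() < p_monitoring:  # operator was watching
                detected += 1
    return detected / failures if failures else 0.0

# A complacent (rarely monitoring) operator vs. a vigilant one
print(f"Complacent (20% monitoring): {detection_rate(0.20):.0%} caught")
print(f"Vigilant (90% monitoring):   {detection_rate(0.90):.0%} caught")
```

Even in this crude sketch the complacent operator misses most failures, and the irony is that highly reliable automation is exactly what trains people to stop monitoring in the first place.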

Interestingly, when researchers examined highly autonomous autopilot systems in aircraft, they found that pilots were often confused by or distrustful of the automation’s decisions (e.g., when it initiated course corrections without any pilot input), suggesting LOW complacency.  But it is important to note that pilots are highly trained and have probably not been subjected to the same effusively positive marketing about the benefits of self-driving technology that the public receives.  Tesla, in essence, tells drivers to “trust us”, further increasing the likelihood of driver complacency:

We are excited to announce that, as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. Eight surround cameras provide 360 degree visibility around the car at up to 250 meters of range. Twelve updated ultrasonic sensors complement this vision, allowing for detection of both hard and soft objects at nearly twice the distance of the prior system. A forward-facing radar with enhanced processing provides additional data about the world on a redundant wavelength, capable of seeing through heavy rain, fog, dust and even the car ahead.

To make sense of all of this data, a new onboard computer with more than 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software. Together, this system provides a view of the world that a driver alone cannot access, seeing in every direction simultaneously and on wavelengths that go far beyond the human senses.


Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced “complacency.” International Journal of Aviation Psychology, 3(1), 1–23.

Some other key readings on complacency:

Parasuraman, R. (2000). Designing automation for human use: empirical studies and quantitative models. Ergonomics, 43(7), 931–951.

Parasuraman, R., & Wickens, C. D. (2008). Humans: Still vital after all these years of automation. Human Factors, 50(3), 511–520.

Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410.


Human Factors Potpourri

Some recent items in the news with a human factors angle:

  • What happened to Google Maps?  An interesting comparison of Google Maps in 2010 and 2016 by designer/cartographer Justin O’Beirne.
  • India will use 3D paintings to slow down drivers.  Excellent use of optical illusions for road safety.
  • Death by GPS.  GPS mis-routing is the easiest and most relatable example of human-automation interaction.  Unfortunately, this article does not discuss the automation literature, instead focusing on more basic processes that, I think, are less relevant.