[humanautonomy.com] Dr. Mica Endsley: Current Challenges and Future Opportunities In Human-Autonomy Research

We had a chance to interview Dr. Mica Endsley about her thoughts on autonomy.

The social science research that we cover in this blog is carried out by a multitude of talented scientists across the world, each studying a different facet of the problem. In our second post in a new series, we interview one of the leaders in the study of the human factors of autonomy, Dr. Mica Endsley.

Down on the farm: Human factors psychologist Margaux Ascherl optimizes technology to make farming more efficient

Complementing the previous post about applied psychology, this new article dives into how one human factors PhD, Margaux Ascherl, is working to make farming more efficient with technology (she also happens to be my former student!):

The world’s population of 7.3 billion is predicted to grow to 9.7 billion by 2050, according to the Global Harvest Initiative. To feed all those people, global agricultural productivity must increase by 1.75 percent annually.

One person working to drive this increase is Margaux Ascherl, PhD, user experience leader at John Deere Intelligent Solutions Group in Urbandale, Iowa. John Deere recruited Ascherl in late 2012 while she was finishing her PhD in human factors psychology at Clemson University. Five years later, she now leads a team responsible for the design and testing of precision agriculture technology used in John Deere equipment.

Ascherl spoke to the Monitor about what it’s like to apply psychology in an agricultural context and how her team is helping farmers embrace new technology to feed the world.

Human-Robot/AI Relationships: Interview with Dr. Julie Carpenter

Over at https://HumanAutonomy.com, we had a chance to interview Dr. Julie Carpenter about her research on human-robot/AI relationships.

As the first post in a series, we interview one of the pioneers in the study of human-AI relationships, Dr. Julie Carpenter. She has over 15 years of experience in human-centered design and human-AI interaction research, teaching, and writing. Her principal research is about how culture influences human perception of AI and robotic systems, and the associated human factors such as user trust and decision-making in human-robot cooperative interactions in natural use-case environments.

Throwback Thursday: A model for types and levels of automation [humanautonomy.com]

This week’s Throwback Thursday post (next door, at humanautonomy.com) covers another seminal paper in the study of autonomy:

This is the second post in our “throwback” series. Here, I will take you through an article written by some of the best in the human factors and ergonomics field: the late Raja Parasuraman, Tom Sheridan, and Chris Wickens. Several authors had introduced the concept of automation being implemented at various levels before, but for me this article nailed it.
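If you want the gist before clicking through: the paper builds on a ten-point scale of automation levels (after Sheridan and Verplank). Here is a rough paraphrase of that scale as a simple Python lookup; the wording is my own summary, not the authors' exact language.

```python
# A rough paraphrase of the 10-point automation scale that Parasuraman,
# Sheridan, and Wickens (2000) build on (after Sheridan and Verplank).
# Wording is my summary, not the authors' exact language.
AUTOMATION_LEVELS = {
    1: "The computer offers no assistance; the human does it all",
    2: "The computer offers a complete set of action alternatives",
    3: "The computer narrows the selection down to a few",
    4: "The computer suggests one alternative",
    5: "The computer executes that suggestion if the human approves",
    6: "The computer allows the human limited time to veto before acting",
    7: "The computer acts automatically, then necessarily informs the human",
    8: "The computer acts, informing the human only if asked",
    9: "The computer acts, informing the human only if it decides to",
    10: "The computer decides and acts on its own, ignoring the human",
}

for level, description in sorted(AUTOMATION_LEVELS.items()):
    print(f"Level {level:>2}: {description}")
```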

Throwback Thursday: The Ironies of Automation [humanautonomy.com]

My third job (in addition to being a professor and curating this blog) is working on another blog with Arathi Sethumadhavan focused on the social science of autonomy and automation. You can find us over here.

Occasionally, I will cross-post items that might be of interest to both readerships.  Over there, we’re starting a new series of posts called Throwback Thursdays where we go back in time to review some seminal papers in the history of human-automation interaction (HAI), but for a lay audience.

The first post discusses Bainbridge’s 1983 paper discussing the “Ironies of Automation”:

Don’t worry, our Throwback Thursday doesn’t involve embarrassing pictures of me or Arathi from 5 years ago. Instead, it is more cerebral. The social science behind automation and autonomy has a long and rich history, and despite being one of the earliest topics of study in engineering psychology, it has even more relevance today.

In this aptly titled paper, Bainbridge discusses, back in 1983(!), the ironic things that can happen when humans interact with automation. The words of this paper ring especially true today, when the design strategy of some companies is to consider the human as an error term to be eliminated.


Did a User Interface Kill 10 Navy Sailors?

I chose a provocative title for this post after reading the report on what caused the wreck of the USS John S. McCain in August of 2017. In summary: the USS John S. McCain was in high-traffic waters when the crew believed they had lost steering control. Despite attempts to slow and maneuver, the ship was hit by another large vessel. The bodies of 10 sailors were eventually recovered, and five others were injured.

Today the Navy released its final report on the accident. After reading it, it seems to me that the report blames the crew. Here are some quotes from the official Naval report:

  • Loss of situational awareness in response to mistakes in the operation of the JOHN S MCCAIN’s steering and propulsion system, while in the presence of a high density of maritime traffic
  • Failure to follow the International Nautical Rules of the Road, a system of rules to govern the maneuvering of vessels when risk of collision is present
  • Watchstanders operating the JOHN S MCCAIN’s steering and propulsion systems had insufficient proficiency and knowledge of the systems

And a rather devastating passage:

In the Navy, the responsibility of the Commanding Officer for his or her ship is absolute. Many of the decisions made that led to this incident were the result of poor judgment and decision making of the Commanding Officer. That said, no single person bears full responsibility for this incident. The crew was unprepared for the situation in which they found themselves through a lack of preparation, ineffective command and control and deficiencies in training and preparations for navigation.

Ouch.

Ars Technica called my attention to an important reason for the accident that the report does not specifically call out: the poor feedback design of the control system. I think it is a problem that the report focuses on “failures” of the people involved, not on the design of the machines and systems they used. After my reading, I would summarize the reason for the accident as: “The ship could be controlled from many locations. This control was transferred using a computer interface. That interface did not give sufficient information about its current state or feedback about which station controlled which functions of the ship. This made the crew think they had lost steering control when actually that control had just been moved to another location.” I base this on information from the report, including:

Steering was never physically lost. Rather, it had been shifted to a different control station and watchstanders failed to recognize this configuration. Complicating this, the steering control transfer to the Lee Helm caused the rudder to go amidships (centerline). Since the Helmsman had been steering 1-4 degrees of right rudder to maintain course before the transfer, the amidships rudder deviated the ship’s course to the left.

Even this section calls out the “failure to recognize this configuration.” If the system is designed well, one shouldn’t have to expend any cognitive or physical resources to know from where the ship is being controlled.
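To make that design point concrete, here is a minimal, purely hypothetical sketch in Python of a feedback-first control transfer. The station names, announcement mechanism, and rudder-reset behavior are illustrative assumptions drawn from the report's description, not the McCain's actual software:

```python
# Hypothetical sketch of a steering-control transfer that makes its state
# visible, rather than the silent hand-off described in the report.
# Station names, the announce mechanism, and the rudder-reset behavior are
# illustrative assumptions; this is not the McCain's actual software.
from dataclasses import dataclass, field

@dataclass
class SteeringControl:
    active_station: str = "Helm"
    rudder_angle_deg: float = 0.0  # positive = right rudder
    log: list = field(default_factory=list)

    def announce(self, message: str) -> None:
        # A real bridge system would drive salient displays or alarms at
        # *every* station, not just the one that requested the transfer.
        self.log.append(message)
        print(message)

    def transfer(self, new_station: str) -> None:
        old_station = self.active_station
        self.active_station = new_station
        # Per the report, the transfer also re-centered the rudder, which
        # silently changed the ship's course. Announce both state changes.
        self.rudder_angle_deg = 0.0
        self.announce(f"STEERING CONTROL: {old_station} -> {new_station}")
        self.announce("RUDDER RESET TO AMIDSHIPS (0 degrees)")

# The Helmsman had been holding slight right rudder before the transfer.
steering = SteeringControl(rudder_angle_deg=3.0)
steering.transfer("Lee Helm")
```

The implementation details are beside the point; the principle is that a transfer changes two safety-critical states at once (who has control, and the rudder angle), and both changes should be impossible to miss at any station.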

Overall, I was surprised at the tone of this report regarding crew performance. Perhaps some of it is deserved, but without a hard look at the systems the crew used, I don’t have much faith that we can avoid future accidents. Fitts and Jones helped launch the human factors field in 1947 when they insisted that the design of the cockpit created accident-prone situations. This went against the belief of the time, which was that “pilot error” was the main factor. Their work ushered in a new era, one where we try to improve the systems people must use as well as their training and decision making. The picture below is of the interface of the USS John S. McCain, commissioned in 1994. I would be very interested to see how it appears in action.

US Navy (USN) Boatswain’s Mate Seaman (BMSN) Charles Holmes mans the helm aboard the Arleigh Burke Class Guided Missile Destroyer USS JOHN S. MCCAIN (DDG 56) as the ship gets underway for a Friends and Family Day cruise from its homeport at Commander Fleet Activities (CFA) Yokosuka Naval Base, Japan. Source: Wikimedia Commons

“Applied psychology is hot, and it’s only getting hotter”…and one more thing

The American Psychological Association’s member magazine, the Monitor, recently highlighted 10 trends in 2018.  One of those trends is that Applied Psychology is hot!

In this special APA Monitor report, “10 Trends to Watch in Psychology,” we explore how several far-reaching developments in psychology are transforming the field and society at large.

Our own Anne McLaughlin, along with other prominent academic and industry applied psychologists, was quoted in the article:

As technology changes the way we work, play, travel and think, applied psychologists who understand technology are more sought after than ever, says Anne McLaughlin, PhD, a professor of human factors and applied cognition in the department of psychology at North Carolina State University and past president of APA’s Div. 21 (Applied Experimental and Engineering Psychology).

Also quoted was Arathi Sethumadhavan:

Human factors psychologist Arathi Sethumadhavan, PhD, has found almost limitless opportunities in the health-care field since finishing her graduate degree in 2009. Though her background was in aviation, she found her human factors skills transferred easily to the medical sector—and those skills have been in demand.

One more thing…

Arathi and I have recently started a new blog, Human-Autonomy Sciences, devoted to the psychology of human-autonomy interaction.  We hope you visit it and contribute to the discussion!

Designing the technology of ‘Blade Runner 2049’

The original Blade Runner is my favorite movie and can be credited with sparking my interest in human-technology/human-autonomy interactions. If you have not seen it, the sequel is fantastic (I’ve seen it twice already and will soon see it a third time).

If you’ve seen the original or the sequel, the representations of incidental technologies may have seemed unusual. For example, the technologies feel like a strange hybrid of digital/analog systems, they are mostly voice controlled, and the hardware and software have a well-worn look. Machines also make satisfying noises as they are working (also present in the sequel). This is a refreshing contrast to the super-clean, touch-based, transparent augmented-reality displays shown in other movies.

This really great post/article from Engadget [WARNING: CONTAINS SPOILERS] profiles the company that designed the technology shown in the movie Blade Runner 2049. I’ve always been fascinated by futuristic UI concepts shown in movies. What is the interaction like? Information density? Multi-modal? Why does it work like that, and does it fit in-world?

The article suggests that the team really thought deeply about how to portray technology and UI by thinking about the fundamentals (I would love to have this job):

Blade Runner 2049 was challenging because it required Territory to think about complete systems. They were envisioning not only screens, but the machines and parts that would make them work.

With this in mind, the team considered a range of alternate display technologies. They included e-ink screens, which use tiny microcapsules filled with positively and negatively charged particles, and microfiche sheets, an old analog format used by libraries and other archival institutions to preserve old paper documents.


Outside Magazine profiles Anne’s rock climbing & human factors research

Anne’s research on attention and rock climbing was recently featured in an article in Outside Magazine:

To trad climb is to be faced with hundreds of such split-second micro decisions, the consequences of which can be fatal. That emphasis on human judgment and its fallibility intrigued Anne McLaughlin, a psychology professor at North Carolina State University. An attention and behavior researcher, she set out to model how and why rock climbers make decisions, and she’d recruited Weil and 31 other trad climbers to contribute data to the project.

The idea for the study first came about at the crag. In 2011, McLaughlin, Chris Wickens, a psychology professor at Colorado State University, and John Keller, an engineer at Alion Science and Technology, converged in Las Vegas for the Human Factors and Ergonomics Society conference, an annual event that brings together various professionals practicing user-focused product design. With Red Rocks just a few minutes away, the three avid climbers were eager to get some time on the rock before the day’s sessions, says Keller, even if it meant starting at 3 a.m.

Tesla counterpoint: “40% reduction in crashes” with introduction of Autosteer

I posted yesterday about the challenges of fully autonomous cars and cars that approach autonomy. Today I bring you a story about the successes of semi-automatic features in automobiles.

Tesla has a feature called Autopilot that assists the driver without being completely autonomous. Autopilot includes car-controlled actions such as collision warnings, automatic emergency braking, and automatic lane keeping. Tesla classifies the Autopilot features as Level 2 automation (Level 5 is considered fully autonomous). Rich has already given our thoughts about calling this Autopilot in a previous post. One particular feature is called Autosteer, described in the NHTSA report as:

The Tesla Autosteer system uses information from the forward-looking camera, the radar sensor, and the ultrasonic sensors, to detect lane markings and the presence of vehicles and objects to provide automated lane-centering steering control based on the lane markings and the vehicle directly in front of the Tesla, if present. The Tesla owner’s manual contains the following warnings: 1) “Autosteer is intended for use only on highways and limited-access roads with a fully attentive driver. When using Autosteer, hold the steering wheel and be mindful of road conditions and surrounding traffic. Do not use Autosteer on city streets, in construction zones, or in areas where bicyclists or pedestrians may be present. Never depend on Autosteer to determine an appropriate driving path. Always be prepared to take immediate action. Failure to follow these instructions could cause serious property damage, injury or death;” and 2) “Many unforeseen circumstances can impair the operation of Autosteer. Always keep this in mind and remember that as a result, Autosteer may not steer Model S appropriately. Always drive attentively and be prepared to take immediate action.” The system does not prevent operation on any road types.
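For context on the “Level 2” label: the levels come from SAE’s J3016 taxonomy of driving automation. Here is a rough paraphrase (my wording, not SAE’s official definitions), sketched as a quick Python reference:

```python
# Rough paraphrase of the SAE J3016 levels of driving automation.
# Wording is my summary, not SAE's official definitions.
SAE_LEVELS = {
    0: "No automation: the human does all driving",
    1: "Driver assistance: automation handles steering OR speed, not both",
    2: "Partial automation: steering AND speed, but the human must monitor",
    3: "Conditional automation: system monitors, human takes over on request",
    4: "High automation: no human fallback needed within its operating domain",
    5: "Full automation: drives anywhere, under all conditions",
}

print(SAE_LEVELS[2])  # where Tesla places Autopilot/Autosteer
```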

An NHTSA report looking into a fatal Tesla crash also noted that the introduction of Autosteer corresponded to a 40% reduction in automobile crashes. That’s a lot considering Dr. Gill Pratt from Toyota said he might be happy with a 1% change.
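As a back-of-the-envelope check on that figure: as I recall, the NHTSA report’s crash rates (airbag-deployment crashes per million miles) fell from roughly 1.3 before Autosteer to 0.8 after. Treat those rates as my recollection rather than quoted values:

```python
# Back-of-the-envelope check on the "40% reduction" claim.
# Before/after rates are my recollection of the NHTSA report's figures
# (airbag-deployment crashes per million miles), not quoted values.
before = 1.3
after = 0.8

reduction = (before - after) / before
print(f"{reduction:.0%} reduction")  # -> 38% reduction, roughly the reported 40%
```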

Autopilot was enabled in October 2015, so there has been a good period of time for post-Autopilot crash data to be generated.

