Mental models inform driver expectations and predictions of vehicle automation. In this chapter, we review the importance and relevance of mental models in driving and automation, define how fundamental psychological mechanisms contribute to their formation, provide a new framing of general and applied mental models and how to measure them, and summarize recent research on methods to support accurate and complete mental model development.
As automation is increasingly used to support control, decision-making, and information processing in complex environments, designers must address the question of how to design systems to foster human–automation coordination (Dekker & Woods, 2002). Automation alters the nature of tasks and forces operators to adapt in novel ways (see this Handbook, Chapters 6, 8). Specifically, operators are required to monitor automated tasks and manage those tasks that remain by using available feedback from the automated system. This feedback differs in type and extent from the feedback encountered under manual control. A monitoring role creates new attentional and knowledge demands in that operators must understand how different elements of complex systems operate and interact, in addition to the effect of their inputs on system behavior (Sarter, Woods, & Billings, 1997). In order to maintain safety, operators must correctly understand operating conditions so that they can recognize when and how to act in situations in which the automation may be unable to respond appropriately. For example, when lane markings are obscured by rain or by glare from the sun, a lane centering system may deactivate the vehicle’s steering support. It is difficult for the driver to maintain correct understanding at all times, particularly when a process is controlled by a complex system with inherent uncertainty—which characterizes use of vehicle automation on public roadways.
The introduction of automation into a complex control task like driving can qualitatively shift an operator from an active processor of information to a passive recipient of information, as well as change the type of feedback s/he receives on the state of the system. As a consequence, operators are at times surprised by the behavior of the automation and uncertain of what the automation is doing, why it behaves in a certain manner, and what it will do next (Wiener, 1989; this Handbook, Chapter 21). Empirical research indicates that operators often have gaps and misconceptions in their mental models of automated systems due to the cognitive demand and complexity of operating such systems in real time (Sarter & Woods, 1994). Other potential consequences of automation’s introduction include over- or under-trust of the automation relative to its capabilities, complacent behavior, and behavioral adaptation in safety-degrading ways (Norman, 1990; Sarter et al., 1997; Parasuraman, Mouloua, & Molloy, 1994; Lee & Moray, 1994; this Handbook, Chapters 4, 12).
Until vehicle automation has achieved a level of performance that safely enables drivers to assume the role of a passenger, it is assistance only, and requires drivers to monitor and, more importantly, to understand when and how to act to complement its driving task performance. At present, drivers receive support from forms of vehicle automation, but are still needed to perform object/event detection and response (OEDR), act as a fallback for system malfunctions, and fulfill non-automated operational, tactical, or strategic activities (SAE International, 2018). From a “human-automation coordination” perspective (Lee, 2018), drivers must monitor automated systems as well as the traffic environment because vehicle control and crash avoidance technologies have operational and functional design limitations and may malfunction. In sharing with the automation the goal of safe driving in an often unpredictable road environment, monitoring requires more than passive observation; drivers must actively supervise ongoing automation performance and detect pre-crash conditions. Supervision of automation in any dynamic, uncertain environment involves information integration and analysis, system expertise, analytical decision-making, sustained attention, and maintenance of manual skills (Bhana, 2010; Casner, Geven, Recker, & Schooler, 2014). Effective development and support of these skills helps operators to troubleshoot and recover if automation fails or if something unexpected happens (Onnasch, Wickens, Li, & Manzey, 2014; Wickens, Li, Santamaria, Sebok, & Sarter, 2010).
For drivers, context-specific system limitations can be difficult to understand (Seppelt, 2009; Larsson, 2012); drivers, consequently, may not recognize their need to act until a conflict is imminent. While events requiring such detection are rare, because current forms of advanced driver assistance systems (ADAS) technology are relatively robust and react to most threats, their occurrence nonetheless represents, to the driver, an ever-present and unpredictable possibility. Clear, unambiguous understanding of system state and behavior is particularly difficult when detection performance is dependent on sensor quality and environmental conditions (i.e., a dynamic operational design domain; Seppelt, Reimer, Angell, & Seaman, 2017). Common, current limitations may include restrictions in operating speed ranges, operational design domain (e.g., geo-fencing), limits in the amount of steering, braking, and acceleration the system can apply, and limitations in lane and object detection performance (e.g., pedestrians, animals, on-road objects, and oncoming vehicles). Additionally, the driver needs to monitor for system failures ranging from sensor blockage to brake pump failure, as these may require immediate intervention. Failures, particularly unpredictable ones or those with an unspecified cause, create uncertainty in the interpretation of the automation’s behavior and foster inaccurate mental models. For example, Kazi, Stanton, and Harrison (2004) exposed drivers to different levels of reliability of an adaptive cruise control (ACC) system and found that failures involving random accelerations of ACC led to mental models of its operation that were predominately flawed. Accurate, well-developed mental models are critical to successful interaction between human operators and automated systems; inaccurate mental models lead to operator misunderstandings and inappropriate use.
Of particular interest in this chapter is the influence of mental models on automation use. An operator’s expectations of system behavior are guided by the completeness and correctness of his/her mental model (Sarter et al., 1997). Mental models influence the way people allocate their attention to automated systems (Moray, 1986), and in turn influence their responses: an operator’s decision to engage or disengage a system reflects his/her prediction of its behavior (Gentner & Stevens, 1983). Mental models clearly impact how people use automation; but, what is a mental model?
A mental model is an operator’s knowledge of system purpose and form, its functioning, and associated state structure (Johnson-Laird, 1983; Rouse & Morris, 1986). Mental models represent working knowledge of system dynamics and structure of physical appearance and layout, and of causal relationships between and among components and processes (Moray, 1999). Mental models allow people to account for and to predict the behavior of physical systems (Gentner & Stevens, 1983). According to Johnson-Laird (1983), mental models enable people to draw inferences and make predictions, to decide on their actions, and to control system operation. In allowing an operator to predict and explain system behavior, and to recognize and remember relationships between system components and events (Wilson & Rutherford, 1989), mental models provide a source to guide people’s expectations (Wickens, 1984).
Further elaborating on this role of mental models in generating predictions, the predictive processing framework (Clark, 2013) has emerged as a powerful new approach to understanding and modeling mental models (Engström et al., 2018). Within the predictive processing framework, a mental model can be seen as a hierarchical generative model, embodied in the brain, that generates predictions after learning over time how states and events in the world, or in one’s own body, give rise to sensory input. Predictive processing suggests that frequent exposure to reliable statistical regularities in the driving environment leads to improvement of the generative model’s predictions and to increasingly automatized performance. Further, failures may be understood as the result of limited exposure to functional limitations and, therefore, of a generative model that is inappropriately tuned for such situations.
Consequently, for the driver’s generative model to become better at predicting automation, it needs feedback (sensory input) on statistical regularities. Automation can be designed to provide transparent feedback on its state, and such feedback has been shown to improve performance (Bennett & Flach, 2011; Flemisch, Winner, Bruder, & Bengler, 2014; Seppelt & Lee, 2007; 2019; Vicente & Rasmussen, 1992). Dynamic human–machine interface (HMI) feedback can provide statistical regularities of sensory input that the model can tune to. If designed to continuously communicate to drivers the evolving relationship between system performance and operating limits, HMIs can support the development of a generative model that properly predicts what the automation can and cannot do (see, e.g., this Handbook, Chapter 15). It is in understanding both the proximity to and type of system limitation (e.g., sensor range, braking capability, functional velocity range, etc.) that drivers are able to initiate proactive intervention in anticipation of automation failure (Seppelt & Lee, 2019).
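As a concrete, if highly simplified, illustration of this tuning process, the sketch below reduces a driver’s generative model of automation performance in one operating condition to a Beta-Bernoulli reliability estimate updated from experienced outcomes. This is an assumption made purely for illustration; the scenario, class name, and numbers are hypothetical and are not drawn from the sources cited above.

```python
# Hedged sketch: a driver's generative model of automation reliability in one
# operating condition (e.g., "lane centering in light rain"), reduced to a
# Beta-Bernoulli estimate tuned by feedback. The scenario, class name, and
# numbers are illustrative assumptions, not a model from this chapter's sources.

class GenerativeReliabilityModel:
    def __init__(self, prior_successes: float = 1.0, prior_failures: float = 1.0):
        # Beta(alpha, beta) belief over the probability that the automation
        # handles this condition correctly; Beta(1, 1) is a uniform prior.
        self.alpha = prior_successes
        self.beta = prior_failures

    def predict(self) -> float:
        # Expected probability of successful automation performance.
        return self.alpha / (self.alpha + self.beta)

    def observe(self, automation_succeeded: bool) -> None:
        # Feedback (sensory input, e.g., via the HMI) tunes the model.
        if automation_succeeded:
            self.alpha += 1.0
        else:
            self.beta += 1.0


if __name__ == "__main__":
    model = GenerativeReliabilityModel()
    # Repeated exposure to a statistical regularity: the system usually
    # succeeds in this condition but occasionally fails.
    for outcome in [True, True, True, False, True, True, True, True, False, True]:
        model.observe(outcome)
    print(f"Predicted reliability in this condition: {model.predict():.2f}")
```

Under this simplification, a driver whose exposure includes no failures in a given condition would see the estimate drift toward certainty of success, mirroring the inappropriately tuned generative model that results from limited exposure to functional limitations.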
Mental models often include approximations and incompleteness because they serve as working heuristics (Payne, 1991). In functioning as heuristics, meant to enable operators to construct simplified representations of complex systems in order to reduce the amount of mental effort required to use them, mental models are notoriously incomplete, unstable, non-technical, and parsimonious (Norman, 1983). Further, drivers often adopt a satisficing strategy rather than an optimal one in trying to satisfy their needs, and are influenced by motivational and emotional factors (Boer & Hoedemaeker, 1998; Summala, 2007). Thus, there is often no single conceptual form that can be defined as accurate and complete. The merit of a mental model, therefore, is not based on its technical depth and specificity, but in its ability to enable a user to accurately (or satisfactorily) predict the behavior of a system.
The means of organizing knowledge into structured patterns affords rapid and flexible processing of information, which translates into rapid responses such as the decision to rely on the automation when operating an automated system (Rumelhart & Ortony, 1977). Mental models are constructed from system-provided information, the environment, and instructions (Norman, 1986). They depend on and are updated from the feedback provided to operators (e.g., proprioceptive feedback or information displays). The accuracy of mental models depends on the effectiveness of system interfaces (Norman, 1988) and the variety of situations encountered. Inaccuracies in mental models are more likely the more unreliable the automation (Kazi et al., 2004). Mental model accuracy may be improved with feedback that informs the operator of the automation’s behavior in a variety of situations (Stanton & Young, 2005). For operators who interact with automation that in turn interacts with the environment, such as drivers, it is important that their mental models of the automated system include an understanding of the environment’s effect on its behavior.

For judgments made under uncertainty, such as the decision to rely on automated systems that control all or part of a complex, dynamic process, two types of cognitive mechanisms—intuition and reasoning—are at work (Kahneman & Frederick, 2002; Stanovich & West, 2002). Intuition (System 1) is characterized by fast, automatic, effortless, and associative operations—those similar to features of perceptual processes. Reasoning (System 2) is characterized by slow, serial, effortful, deliberately controlled, and often relatively flexible and rule-governed operations. System 1 generates impressions that factor into judgments, while System 2 is involved directly in all judgments, whether they originate from impressions or from deliberate reasoning.
Consequently, under uncertainty or under time pressure, mental models are subject to cognitive biases—systematic patterns of deviation from norm or rationality in judgment (Haselton, Nettle, & Andrews, 2005). Although there is a large variety of cognitive biases, a number of them importantly affect mental models of automation.
Thus, it can be expected that drivers may use biased, uncertain, or incomplete knowledge when operating automation. Good automated system designs should strive to minimize the effect of these known human biases and limitations, and to measure the accuracy of mental models.
Cognitivist theories (e.g., Shiffrin & Schneider, 1977; Rumelhart & Norman, 1981) commonly reference two types of knowledge, which are relevant to how we define mental models: (1) declarative knowledge, which is acquired by education and can be literally expressed; it is knowledge of something; and (2) procedural knowledge, which is acquired by practice and can be applied to something; it is knowledge of how to do something.
In Figure 3.1, the declarative knowledge type corresponds to the “General Mental Model” and the procedural knowledge type corresponds to the “Applied Mental Model.” A general mental model includes general system knowledge (i.e., understanding prior to interaction). An applied mental model is a dynamic, contextually-driven understanding of system behavior and interaction (i.e., understanding during interaction).
Figure 3.1 Conceptual model of the influence of a driver’s mental model on automation reliance.
Prior to the use of an automated system, a driver’s initial mental model is constructed based on a variety of information sources, which may include a vehicle owner’s manual, perceptions of other related technologies, word-of-mouth, marketing information, etc. This information is “general” in the sense that it may contain a basic understanding of system purpose and behavior, as well as a set of operating rules and conditions for proper use, but does not prescribe how the driver should behave specific to the interaction of system state with environmental conditions. For example, a driver may understand as part of his/her “general” mental model that ACC does not work effectively in rainy conditions, but may not have a well-developed “applied” mental model to understand the combinations of road conditions and rain intensity that result in unreliable detection of a lead vehicle. As a driver gains experience using an automated system in a variety of situations, s/he develops his/her “applied” mental model, which is conceptually consistent with the idea of “situation models” connecting bottom-up environmental input with top-down knowledge structures (Durso, Rawson, & Girotto, 2007). As depicted in Figure 3.1, both “general” and “applied” mental models affect reliance action (see also this Handbook, Chapter 4). A correct “general” mental model is important in the construction of an adequate “applied” model when experiencing concrete situations on the road and for selecting appropriate actions (Cotter & Mogilka, 2007; Seppelt & Lee, 2007). In turn, experience updates the “general” mental model. A mismatch between a driver’s general mental model and experience can negatively affect trust and acceptance (Lee & See, 2004). To help explain drivers’ reliance decisions and vehicle automation use behaviors in the short and long term, it is necessary to measure mental models.
Self-report techniques are commonly used to evaluate and articulate differences in mental models. Operators are asked to explicitly define their understanding of a system via questionnaire or interview (e.g., Kempton, 1986; Payne, 1991). Multiple biases, relating to social/communication factors and to the background/experience of both the participant and the analyst, can be introduced in this type of data-gathering process (Revell & Stanton, 2017). However, research on trust in automation has provided a reference for determining which aspects of a general mental model are important for correct system use (Kazi et al., 2007; Makoto, 2012). Healthy, calibrated trust occurs if information on a system’s purpose, process, and performance (PPP) is supplied to drivers (Lee & See, 2004). Recently developed questionnaires probe the driver’s understanding of these information types. Beggiato and Krems (2013) developed a 32-item questionnaire on ACC functionality in specific PPP situations. Questions covered general ACC functionality (e.g., “ACC maintains a predetermined speed in an empty lane”) as well as system limitations described in the owner’s manual (e.g., “ACC detects motorcycles”). Seppelt (2009) developed a 16-item questionnaire assessing participants’ knowledge of the designer’s intended use of ACC (i.e., purpose), of its operational sensing range and event detection capabilities (i.e., process), and of its behavior in specific use cases (i.e., performance). These questionnaires provide a template for the assessment of other automated technologies.
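As a rough sketch of how responses to such questionnaires can be scored into a general mental model measure, the following assumes a true/false item format keyed to the purpose, process, and performance (PPP) categories; the item texts and answer key are invented placeholders rather than items from Beggiato and Krems (2013) or Seppelt (2009).

```python
# Hedged sketch: scoring a true/false mental model questionnaire by purpose,
# process, and performance (PPP) category. The items and answer key below are
# invented placeholders, not items from the questionnaires cited in the text.

from collections import defaultdict

ANSWER_KEY = {
    # item_id: (PPP category, correct answer)
    "q01": ("purpose", True),       # e.g., "ACC is designed to maintain a set headway"
    "q02": ("process", False),      # e.g., "ACC detects stationary obstacles"
    "q03": ("performance", False),  # e.g., "ACC brakes reliably in heavy rain"
}

def score_mental_model(responses: dict) -> dict:
    """Return the proportion of correct answers per PPP category for one participant."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item_id, (category, truth) in ANSWER_KEY.items():
        total[category] += 1
        if responses.get(item_id) == truth:
            correct[category] += 1
    return {category: correct[category] / total[category] for category in total}

participant_responses = {"q01": True, "q02": True, "q03": False}
print(score_mental_model(participant_responses))
# {'purpose': 1.0, 'process': 0.0, 'performance': 1.0}
```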
As stated, in functioning as heuristics, mental models often have no single conceptual form that can be defined as accurate and complete. It is therefore important to assess a driver’s procedural knowledge in addition to his/her declarative mental model, that is, to evaluate the behavioral and performance output from “running” his/her mental model. Such procedural knowledge, or a driver’s applied mental model, is accessed through analysis of driver behavioral measures, e.g., response time and response quality to events, eye movements, or observation of errors. For an overview of measures of mental models, see Table 3.1.
Table 3.1 Measures of mental models: General Mental Model Measures and Applied Mental Model Measures.
The set of measures listed in Table 3.1 offers a starting point for how to assess general and applied mental models of vehicle automation. Further research is required to assess whether and to what extent these measures provide practical insight into drivers’ reliance behavior for the diversity of automated technologies and their real-world use conditions. Further knowledge of how behavior changes over the longer term, after extended use of automated systems, as a function of the type and complexity of a driver’s mental model is also important.
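As one illustration of how an applied mental model might be operationalized from behavioral data, the sketch below assumes an event log in which each entry records whether the situation was within the automation’s capability, whether the driver intervened, and the response time; the field names, the appropriateness criterion, and the 2.5 s threshold are assumptions for illustration only, not measures prescribed by the sources cited above.

```python
# Hedged sketch: an applied mental model proxy computed from a logged set of
# driving events. Each entry records whether the situation was within the
# automation's capability, whether the driver intervened, and the response time.
# Field names, the appropriateness criterion, and the 2.5 s threshold are
# illustrative assumptions only.

events = [
    {"within_capability": True,  "driver_intervened": False, "response_time_s": None},
    {"within_capability": False, "driver_intervened": True,  "response_time_s": 1.8},
    {"within_capability": False, "driver_intervened": False, "response_time_s": None},
    {"within_capability": True,  "driver_intervened": True,  "response_time_s": 3.2},
]

def appropriate_reliance_rate(events: list) -> float:
    """Proportion of events where reliance matched the automation's capability:
    relying when the system could handle the event, intervening when it could not."""
    appropriate = sum(
        1 for e in events if e["driver_intervened"] != e["within_capability"]
    )
    return appropriate / len(events)

def timely_intervention_rate(events: list, threshold_s: float = 2.5):
    """Of the events requiring intervention, the share handled within threshold_s."""
    required = [e for e in events if not e["within_capability"]]
    if not required:
        return None
    timely = [
        e for e in required
        if e["driver_intervened"]
        and e["response_time_s"] is not None
        and e["response_time_s"] <= threshold_s
    ]
    return len(timely) / len(required)

print(appropriate_reliance_rate(events))   # 0.5 in this toy log
print(timely_intervention_rate(events))    # 0.5
```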
In order to safely use automated systems, drivers must fundamentally understand their role and responsibilities on a moment-by-moment basis. In essence, they must be able to accurately translate their general mental model to the applied mental model question of “who is responsible for and most capable of reacting to the present vehicle and situation dynamics: me or the automated system?”. Recent research points to both general confusion about the capabilities of deployed forms of vehicle automation (i.e., inaccurate general mental models; e.g., Abraham, Seppelt, Mehler, & Reimer, 2017) as well as incomplete understanding and expectations regarding limitations of automation (e.g., Victor et al., 2018). Based on current system design practices, there seems to be a fundamental disconnect between drivers’ general and applied mental models. For example, in Victor et al. (2018), drivers were trained prior to use on the limits of highly reliable (but not perfect) automation. However, in practice, they experienced reliable system operation until the final moment of the study. Regardless of the amount of initial training on system limitations, 30% of drivers crashed into the object they “knew” was outside the detection capabilities of the system. Without reinforcement of the general mental model by the dynamic experience, information decayed or was dominated by dynamic learned trust. Consistent with previous findings on the development of mental models of vehicle automation relative to initial information (Beggiato, Pereira, Petzoldt, & Krems, 2015), this study found that system limitations initially described but not experienced tend to disappear from a driver’s mental model. Research to date on mental models of vehicle automation indicates a need to support and/or train drivers’ understanding of in situ vehicle limitations and capabilities through dynamic HMI information (Seppelt & Lee, 2019; this Handbook, Chapters 15, 16, 18), routine driver training, and/or intelligent tutoring systems.
This chapter described the importance and relevance of mental models in driving and automation, defined how fundamental psychological mechanisms contribute to their formation, provided a new framing of general and applied mental models and how to measure them, and concluded with a review of recent research on how to support accurate and complete mental model development. Future research needs to examine relationships between mental models, trust, and both short- and long-term acceptance across types and combinations of vehicle automation.