Driver’s Mental Model of Vehicle Automation

Authored by: Bobbie Seppelt, Trent Victor

Handbook of Human Factors for Automated, Connected, and Intelligent Vehicles

Print publication date:  June  2020
Online publication date:  May  2020

Print ISBN: 9781138035027
eBook ISBN: 9781315269689

DOI: 10.1201/b21974-3

 

Abstract

Mental models inform driver expectations and predictions of vehicle automation. In this chapter, we review the importance and relevance of mental models in driving and automation, define how fundamental psychological mechanisms contribute to their formation, provide a new framing of general and applied mental models and how to measure them, and summarize recent research on methods to support accurate and complete mental model development.



Key Points

  • An operator’s expectations of system behavior are guided by the completeness and correctness of his/her mental model.
  • The accuracy and completeness of mental models depend on the effectiveness of system interfaces and on the variety of situations drivers encounter.
  • Drivers may use biased, uncertain, or incomplete knowledge when operating automation. Good automated system design should strive to account for and mitigate these known human biases and processing limitations.
  • Both “general” and “applied” mental models affect reliance action. A mismatch between a driver’s general and applied mental models can negatively affect trust and acceptance of vehicle automation.
  • Multiple qualitative and quantitative measures can be used to assess the impact of mental models on driver’s reliance on automation.
  • To support accurate and complete mental model development, drivers need information on the purpose, process, and performance of automated systems.

3.1  Importance and Relevance of Mental Models in Driving and Automation

As automation is increasingly used to support control, decision-making, and information processing in complex environments, designers must address the question of how to design systems to foster human–automation coordination (Dekker & Woods, 2002). Automation alters the nature of tasks and forces operators to adapt in novel ways (see this Handbook, Chapters 6, 8). Specifically, operators are required to monitor automated tasks and manage those tasks that remain by using available feedback from the automated system. This feedback is altered in type and extent relative to the feedback encountered under manual control. A monitoring role creates new attentional and knowledge demands in that operators must understand how different elements of complex systems operate and interact, in addition to the effect of their inputs on system behavior (Sarter, Woods, & Billings, 1997). To maintain safety, operators must correctly understand operating conditions so that they can recognize when and how to act in situations in which the automation may be unable to respond appropriately. For example, when lane markings are masked by rain or by the glare of the sun, a lane centering system may deactivate the vehicle’s steering support. It is difficult for the driver to maintain correct understanding at all times, particularly when a process is controlled by a complex system with inherent uncertainty—which characterizes use of vehicle automation on public roadways.

The introduction of automation into a complex control task like driving can qualitatively shift an operator from an active processor of information to a passive recipient of information, as well as change the type of feedback s/he receives on the state of the system. As a consequence, operators are at times surprised by the behavior of the automation and uncertain of what the automation is doing, why it behaves in a certain manner, and what it will do next (Wiener, 1989; this Handbook, Chapter 21). Empirical research indicates that operators often have gaps and misconceptions in their mental models of automated systems due to the cognitive demand and complexity of operating such systems in real time (Sarter & Woods, 1994). Other potential consequences of automation’s introduction include over- or under-trust of the automation relative to its capabilities, complacent behavior, and behavioral adaptation in safety-degrading ways (Norman, 1990; Sarter et al., 1997; Parasuraman, Mouloua, & Molloy, 1994; Lee & Moray, 1994; this Handbook, Chapters 4, 12).

Until vehicle automation achieves a level of performance that safely enables drivers to assume the role of a passenger, it is assistance only, and requires drivers to monitor and, more importantly, to understand when and how to act to complement its driving task performance. At present, drivers receive support from forms of vehicle automation, but are still needed to perform object/event detection and response (OEDR), act as a fallback for system malfunctions, and fulfill non-automated operational, tactical, or strategic activities (SAE International, 2018). From a “human-automation coordination” perspective (Lee, 2018), drivers must monitor automated systems as well as the traffic environment because vehicle control and crash avoidance technologies have operational and functional design limitations and may malfunction. Because drivers share with the automation the goal of safe driving in an often unpredictable road environment, monitoring requires more than passive observation; drivers must actively supervise ongoing automation performance and detect pre-crash conditions. Supervision of automation in any dynamic, uncertain environment involves information integration and analysis, system expertise, analytical decision-making, sustained attention, and maintenance of manual skills (Bhana, 2010; Casner, Geven, Recker, & Schooler, 2014). Effective development and support of these skills helps operators to troubleshoot and recover if automation fails or if something unexpected happens (Onnasch, Wickens, Li, & Manzey, 2014; Wickens, Li, Santamaria, Sebok, & Sarter, 2010).

For drivers, context-specific system limitations can be difficult to understand (Seppelt, 2009; Larsson, 2012); drivers, consequently, may not recognize their need to act until a conflict is imminent. Although events requiring such detection are rare, because current advanced driver assistance system (ADAS) technologies are relatively robust and react to most threats, their occurrence represents, to the driver, an ever-present and unpredictable possibility. Clear, unambiguous understanding of system state and behavior is particularly difficult when detection performance is dependent on sensor quality and environmental conditions (i.e., a dynamic operational design domain; Seppelt, Reimer, Angell, & Seaman, 2017). Common, current limitations may include restrictions in operating speed ranges, operational design domain (e.g., geo-fencing), limits in the amount of steering, braking, and acceleration the system can apply, and limitations in lane and object detection performance (e.g., pedestrians, animals, on-road objects, and oncoming vehicles). Additionally, the driver needs to monitor for system failures ranging from sensor blockage to brake pump failure, as these may require immediate intervention. Failures, particularly unpredictable ones or those with an unspecified cause, create uncertainty in the interpretation of automation’s behavior and inaccurate mental models. For example, Kazi, Stanton, and Harrison (2004) exposed drivers to different levels of reliability of an adaptive cruise control (ACC) system and found that failures involving random accelerations of ACC led to mental models of its operation that were predominantly flawed. Accurate, well-developed mental models are critical to successful interaction between human operators and automated systems; inaccurate mental models lead to operator misunderstandings and inappropriate use.

Of particular interest in this chapter is the influence of mental models on automation use. An operator’s expectations of system behavior are guided by the completeness and correctness of his/her mental model (Sarter et al., 1997). Mental models influence the way people allocate their attention to automated systems (Moray, 1986), and in turn influence their responses: an operator’s decision to engage or disengage a system reflects his/her prediction of its behavior (Gentner & Stevens, 1983). Mental models clearly impact how people use automation. But what is a mental model?

3.2  Defining Mental Models

A mental model is an operator’s knowledge of system purpose and form, its functioning, and associated state structure (Johnson-Laird, 1983; Rouse & Morris, 1986). Mental models represent working knowledge of system dynamics and structure of physical appearance and layout, and of causal relationships between and among components and processes (Moray, 1999). Mental models allow people to account for and to predict the behavior of physical systems (Gentner & Stevens, 1983). According to Johnson-Laird (1983), mental models enable people to draw inferences and make predictions, to decide on their actions, and to control system operation. In allowing an operator to predict and explain system behavior, and to recognize and remember relationships between system components and events (Wilson & Rutherford, 1989), mental models provide a source to guide people’s expectations (Wickens, 1984).

Further elaborating on this role of mental models in generating predictions, the predictive processing framework (Clark, 2013) has emerged as a powerful new approach to understanding and modeling mental models (Engström et al., 2018). Within the predictive processing framework, a mental model can be seen as a hierarchical generative model. This generative model, embodied in the brain, produces predictions after learning over time how states and events in the world, or in one’s own body, give rise to sensory input. Predictive processing suggests that frequent exposure to reliable statistical regularities in the driving environment will improve the generative model’s predictions and lead to increasingly automatized performance. Failures, in turn, may be understood in terms of limited exposure to functional limitations, and therefore as a generative model that is inappropriately tuned for such situations.
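
One way to make this tuning process concrete is a simple prediction-error (delta-rule) update, in which a prediction is nudged toward each observed outcome in proportion to the error. The sketch below is an illustrative toy only, not a model drawn from the predictive processing literature cited above; the situations, success rates, and learning rate are assumed values.

```python
# Toy delta-rule illustration of how exposure to statistical regularities tunes a
# prediction, and how rarely encountered limitations stay poorly tuned.
# All parameters are hypothetical and purely illustrative.

def update(prediction, outcome, learning_rate=0.1):
    """Nudge the prediction toward the observed outcome (prediction-error learning)."""
    return prediction + learning_rate * (outcome - prediction)

# Scenario A: frequent exposure -- the automation handles ~95% of ordinary cut-ins.
p_ordinary = 0.5                              # initially uncertain prediction of success
for i in range(200):                          # many exposures
    outcome = 0.0 if i % 20 == 0 else 1.0     # roughly a 95% success rate
    p_ordinary = update(p_ordinary, outcome)

# Scenario B: rare exposure -- a stationary-object limitation encountered only twice.
p_stationary = 0.5                            # same initial uncertainty
for outcome in [0.0, 0.0]:                    # the automation fails both times
    p_stationary = update(p_stationary, outcome)

print(f"Predicted success, frequently experienced situations: {p_ordinary:.2f}")
print(f"Predicted success, rarely encountered limitation: {p_stationary:.2f}")
```

The contrast between the two scenarios mirrors the point above: predictions tuned by frequent exposure approach the true regularity, whereas predictions about rarely experienced limitations remain poorly calibrated.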

Consequently, for the driver’s generative model to become better at predicting automation, it needs feedback (sensory input) on statistical regularities. Automation can be designed to provide transparent feedback on its state, and such feedback has been shown to improve performance (Bennett & Flach, 2011; Flemisch, Winner, Bruder, & Bengler, 2014; Seppelt & Lee, 2007; 2019; Vicente & Rasmussen, 1992). Dynamic human–machine interface (HMI) feedback can provide statistical regularities of sensory input that the model can tune to. If designed to continuously communicate to drivers the evolving relationship between system performance and operating limits, HMIs can support the development of a generative model to properly predict what the automation can and cannot do (see, e.g., this Handbook, Chapter 15). It is in understanding both the proximity to and type of system limitation (e.g., sensor range, braking capability, functional velocity range, etc.) that drivers are able to initiate proactive intervention in anticipation of automation failure (Seppelt & Lee, 2019).
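
As an illustration of what “proximity to a system limitation” could mean computationally, the sketch below compares the deceleration needed to resolve a closing conflict with an assumed braking-authority limit; the resulting ratio is the kind of continuously varying quantity an HMI could present. The signal names and the 0.3 g authority value are assumptions for illustration, not a published ACC design.

```python
# Illustrative calculation of proximity to one type of system limit:
# how close the required deceleration is to the ACC's assumed braking authority.
# Signal names and the 0.3 g authority value are hypothetical.

G = 9.81  # gravitational acceleration, m/s^2

def required_decel(range_m: float, closing_speed_mps: float) -> float:
    """Constant deceleration needed to null the closing speed within the current gap."""
    if closing_speed_mps <= 0:
        return 0.0                      # opening or steady gap: no braking needed
    return closing_speed_mps ** 2 / (2 * range_m)

def proximity_to_brake_limit(range_m, closing_speed_mps, authority_g=0.3):
    """0 = no demand, 1 = at the assumed braking authority, >1 = beyond it."""
    return required_decel(range_m, closing_speed_mps) / (authority_g * G)

# Example: lead vehicle 30 m ahead, closing at 6 m/s.
p = proximity_to_brake_limit(range_m=30.0, closing_speed_mps=6.0)
print(f"Proximity to braking limit: {p:.2f}")   # ~0.20 -> well within authority
```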

3.3  Mental Models under Uncertainty

Mental models often include approximations and incompleteness because they serve as working heuristics (Payne, 1991). As heuristics, they enable operators to construct simplified representations of complex systems that reduce the mental effort required to use them; as a result, mental models are notoriously incomplete, unstable, non-technical, and parsimonious (Norman, 1983). Further, drivers often adopt a satisficing strategy rather than an optimal one in trying to satisfy their needs, and are influenced by motivational and emotional factors (Boer & Hoedemaeker, 1998; Summala, 2007). Thus, there is often no single conceptual form that can be defined as accurate and complete. The merit of a mental model, therefore, is not based on its technical depth and specificity, but on its ability to enable a user to accurately (or satisfactorily) predict the behavior of a system.

The means of organizing knowledge into structured patterns affords rapid and flexible processing of information, which translates into quick responses such as the decision to rely on the automation when operating an automated system (Rumelhart & Ortony, 1977). Mental models are constructed from system-provided information, the environment, and instructions (Norman, 1986). They depend on and are updated from the feedback provided to operators (e.g., proprioceptive feedback or information displays). The accuracy of mental models depends on the effectiveness of system interfaces (Norman, 1988) and the variety of situations encountered. Inaccuracies in mental models are more likely the more unreliable the automation (Kazi et al., 2004). Mental model accuracy may be improved with feedback that informs the operator of the automation’s behavior in a variety of situations (Stanton & Young, 2005). For operators who interact with automation that in turn interacts with the environment, such as drivers, it is important that their mental models of the automated system include an understanding of the environment’s effect on its behavior.

For judgments made under uncertainty, such as the decision to rely on automated systems that control all or part of a complex, dynamic process, two types of cognitive mechanisms—intuition and reasoning—are at work (Kahneman & Frederick, 2002; Stanovich & West, 2002). Intuition (System 1) is characterized by fast, automatic, effortless, and associative operations—those similar to features of perceptual processes. Reasoning (System 2) is characterized by slow, serial, effortful, deliberately controlled, and often relatively flexible and rule-governed operations. System 1 generates impressions that factor into judgments, while System 2 is involved directly in all judgments, whether they originate from impressions or from deliberate reasoning.

Consequently, under uncertainty or under time pressure, mental models are subject to cognitive biases—systematic patterns of deviation from norm or rationality in judgment (Haselton, Nettle, & Andrews, 2005). Although there are a large variety of cognitive biases, examples of important biases affecting mental models of automation include

  • Confirmation bias—the tendency to search for, interpret, focus on, and remember information in a way that confirms one’s preconceptions (Oswald & Grosjean, 2004).
  • Anchoring bias—the tendency to rely too heavily, or “anchor,” on one trait or piece of information when making decisions (Zhang, Lewis, Pellon, & Coleman, 2007).
  • Overconfidence bias—excessive confidence in one’s own answers to questions (Hilbert, 2012).
  • Gambler’s fallacy—the mistaken belief that if something happens more frequently than normal during a given period, it will happen less frequently in the future (or vice versa) (Tversky & Kahneman, 1974).

Thus, it can be expected that drivers may use biased, uncertain, or incomplete knowledge when operating automation. Good automated system designs should strive to minimize the effect of these known human biases and limitations, and to measure the accuracy of mental models.
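
To make the last bias in the list above concrete: if automation failures in a given condition are (approximately) independent events, a long run of flawless operation does not change the probability of a failure on the next drive, even though a gambler’s-fallacy belief suggests otherwise. The short simulation below is purely illustrative, and the 2% per-drive failure probability is an assumed value.

```python
# Illustrative check on the gambler's fallacy for independent automation failures.
# The 2% per-drive failure probability is an assumed, illustrative value.
import random

random.seed(0)
P_FAIL = 0.02
trials = 100_000

after_streak = []   # outcomes observed immediately after 10 failure-free drives
streak = 0
for _ in range(trials):
    fail = random.random() < P_FAIL
    if streak >= 10:                 # this drive follows a long "clean" streak
        after_streak.append(fail)
    streak = 0 if fail else streak + 1

rate = sum(after_streak) / len(after_streak)
print(f"Failure rate right after 10 clean drives: {rate:.3f} (base rate {P_FAIL})")
```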

3.4  General and Applied Mental Models

Cognitivist theories (e.g., Shiffrin & Schneider, 1977; Rumelhart & Norman, 1981) commonly reference two types of knowledge, which are relevant to how we define mental models: (1) declarative knowledge, which is acquired through education and can be explicitly expressed; it is knowledge of something; and (2) procedural knowledge, which is acquired through practice and can be applied to something; it is knowledge of how to do something.

In Figure 3.1, the declarative knowledge type corresponds to the “General Mental Model” and the procedural knowledge type corresponds to the “Applied Mental Model.” A general mental model includes general system knowledge (i.e., understanding prior to interaction). An applied mental model is a dynamic, contextually-driven understanding of system behavior and interaction (i.e., understanding during interaction).


Figure 3.1   Conceptual model of the influence of a driver’s mental model on automation reliance.

Prior to the use of an automated system, a driver’s initial mental model is constructed based on a variety of information sources, which may include a vehicle owner’s manual, perceptions of other related technologies, word-of-mouth, marketing information, etc. This information is “general” in the sense that it may contain a basic understanding of system purpose and behavior, as well as a set of operating rules and conditions for proper use, but does not prescribe how the driver should behave specific to the interaction of system state with environmental condition. For example, a driver may understand as part of his/her “general” mental model that ACC does not work effectively in rainy conditions, but may not have a well-developed “applied” mental model to understand the combination of road conditions and rain densities that result in unreliable detection of a lead vehicle. As a driver gains experience using an automated system in a variety of situations, s/he develops his/her “applied” mental model, which is conceptually consistent with the idea of “situation models” connecting bottom-up environmental input with top-down knowledge structures (Durso, Rawson, & Girotto, 2007). As depicted in Figure 3.1, both “general” and “applied” mental models affect reliance action (also see in this Handbook Chapter 4). A correct “general” mental model is important in the construction of an adequate “applied” model when experiencing concrete situations on the road and for selecting appropriate actions (Cotter & Mogilka, 2007; Seppelt & Lee, 2007). In turn, experience updates the “general” mental model. A mismatch between a driver’s general mental model and experience can negatively affect trust and acceptance (Lee & See, 2004). To help explain drivers’ reliance decisions and vehicle automation use behaviors in the short- and long term, it is necessary to measure mental models.
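
The sketch below expresses the distinction drawn in Figure 3.1 in schematic form: a store of declarative knowledge (the general mental model) and context-dependent expectations built from experience (the applied mental model) jointly inform a reliance decision, and experience feeds back into the applied model. The class names, attributes, conditions, and threshold are hypothetical and serve only to make the concepts concrete.

```python
# Schematic (hypothetical) representation of the Figure 3.1 concepts:
# general vs. applied mental models jointly informing a reliance decision.
from dataclasses import dataclass, field

@dataclass
class GeneralMentalModel:
    """Declarative knowledge acquired outside of use (manual, marketing, word-of-mouth)."""
    known_limitations: set = field(default_factory=lambda: {"heavy rain", "faded lane markings"})

    def allows(self, condition: str) -> bool:
        return condition not in self.known_limitations

@dataclass
class AppliedMentalModel:
    """Procedural, context-driven expectations built up through experience in situ."""
    predicted_capability: dict = field(default_factory=dict)  # condition -> expected reliability

    def expect(self, condition: str) -> float:
        return self.predicted_capability.get(condition, 0.5)  # unseen condition: uncertain

    def update(self, condition: str, success: bool, lr: float = 0.2) -> None:
        prior = self.expect(condition)
        self.predicted_capability[condition] = prior + lr * (float(success) - prior)

def rely(general: GeneralMentalModel, applied: AppliedMentalModel,
         condition: str, threshold: float = 0.7) -> bool:
    """Rely only if the condition is not a known limitation AND experience predicts reliability."""
    return general.allows(condition) and applied.expect(condition) >= threshold

gmm, amm = GeneralMentalModel(), AppliedMentalModel()
for _ in range(10):
    amm.update("dry highway", success=True)      # repeated successful experience
print(rely(gmm, amm, "dry highway"))             # True: both models support reliance
print(rely(gmm, amm, "heavy rain"))              # False: blocked by the general model
```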

3.5  Measurement of General and Applied Mental Models

Self-report techniques are commonly used to evaluate and articulate differences in mental models. Operators are asked to explicitly define their understanding of a system via questionnaire or interview (e.g., Kempton, 1986; Payne, 1991). Multiple biases can be introduced in this type of data-gathering process, relating to social/communication factors and background/experience, from both the participant and the analyst (Revell & Stanton, 2017). However, research on trust in automation has provided a reference for determining which aspects of a general mental model are important for correct system use (Kazi et al., 2007; Itoh, 2012). Healthy, calibrated trust occurs if information on a system’s purpose, process, and performance (PPP) is supplied to drivers (Lee & See, 2004). Recently developed questionnaires probe the driver’s understanding of these information types. Beggiato and Krems (2013) developed a 32-item questionnaire on ACC functionality in specific PPP situations. Questions covered general ACC functionality (e.g., “ACC maintains a predetermined speed in an empty lane”) as well as system limitations described in the owner’s manual (e.g., “ACC detects motorcycles”). Seppelt (2009) developed a 16-item questionnaire assessing participants’ knowledge of the designer’s intended use of ACC (i.e., purpose), of its operational sensing range and event detection capabilities (i.e., process), and of its behavior in specific use cases (i.e., performance). These questionnaires provide a template for assessment of other automated technologies.
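
As an illustration of how such a questionnaire might be scored, the sketch below computes per-dimension (purpose/process/performance) accuracy from true/false responses against an answer key. The items and key are invented for illustration and are not taken from Beggiato and Krems (2013) or Seppelt (2009).

```python
# Illustrative scoring of a PPP-style mental model questionnaire.
# Items, answer key, and responses are invented for illustration only.
from collections import defaultdict

# Each item: (dimension, statement, correct_answer)
ITEMS = [
    ("purpose",     "ACC is designed to replace the driver entirely.",           False),
    ("process",     "ACC relies on a forward sensor with a limited range.",      True),
    ("performance", "ACC maintains a set speed when the lane ahead is empty.",   True),
    ("performance", "ACC reliably detects stationary objects at highway speed.", False),
]

def score_ppp(responses):
    """Return proportion correct per PPP dimension for one participant."""
    correct, total = defaultdict(int), defaultdict(int)
    for (dimension, _statement, key), answer in zip(ITEMS, responses):
        total[dimension] += 1
        correct[dimension] += int(answer == key)
    return {d: correct[d] / total[d] for d in total}

# Example participant who overestimates object-detection performance.
print(score_ppp([False, True, True, True]))
# -> {'purpose': 1.0, 'process': 1.0, 'performance': 0.5}
```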

As stated, in functioning as heuristics, mental models often have no single conceptual form that can be defined as accurate and complete. It is therefore important to assess a driver’s procedural knowledge in addition to his/her declarative mental model, that is, to evaluate the behavioral and performance output from “running” his/her mental model. Such procedural knowledge, or a driver’s applied mental model, is accessed through analysis of driver behavioral measures, e.g., response time and response quality to events, eye movements, or observation of errors. For an overview of measures of mental models, see Table 3.1.

Table 3.1   Example Measures of General and Applied Mental Models

General Mental Model Measures

  • Questionnaires on purpose, process, and performance (PPP)
  • Automation status ratings (i.e., mental model accuracy)
    • Automated system mode(s)
    • Proximity of current automated system state to its capabilities/limits
    • Expected system response and behavior in specific conditions
  • Trust ratings

Applied Mental Model Measures

  • Monitoring of the driving environment
    • Sampling rate and length to the forward roadway
    • Sampling rate and length to designated safety-critical areas (e.g., to intersections & crosswalks)
  • Trust-related behaviors during use of the automation (e.g., hands hovering near the steering wheel)
  • Presence of safety-related behaviors prior to tactical maneuvers or hazard responses (e.g., glances to rear-view mirror & side mirrors, turn signal use, and over-the-shoulder glances)
  • Secondary task use
  • Level of skill loss: Manual performance after a period of automated driving compared with manual task performance during the same period of time
  • Response time to unexpected events/hazards or system errors/failures
  • Use of automation within vs. outside its operational design domain (ODD)

The set of measures listed in Table 3.1 offers a starting point for assessing general and applied mental models of vehicle automation. Further research is required to assess whether and to what extent these measures provide practical insight into drivers’ reliance behavior across the diversity of automated technologies and their real-world use conditions. Further knowledge on how behavior changes in the longer term, after extended use of automated systems, as a function of the type and complexity of a driver’s mental model, is also important.
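
As a concrete example of deriving two of the applied (behavioral) measures in Table 3.1 from logged data, the sketch below computes a forward-roadway sampling proportion from glance records and a response time to an unexpected event. The field names, glance records, and timestamps are hypothetical; real driving data would require more careful glance and event definitions.

```python
# Illustrative computation of two applied mental model measures from logged data.
# Glance records, field names, and timestamps are hypothetical.

glances = [  # (start_s, end_s, region)
    (0.0, 4.2, "forward"), (4.2, 5.0, "center_stack"),
    (5.0, 9.5, "forward"), (9.5, 10.3, "mirror"), (10.3, 14.0, "forward"),
]

def forward_sampling_proportion(glances):
    """Share of time spent glancing at the forward roadway (a monitoring measure)."""
    total = sum(end - start for start, end, _ in glances)
    forward = sum(end - start for start, end, region in glances if region == "forward")
    return forward / total

def response_time(event_onset_s, first_response_s):
    """Latency from hazard/system-failure onset to first driver response (brake or steer)."""
    return first_response_s - event_onset_s

print(f"Forward sampling proportion: {forward_sampling_proportion(glances):.2f}")
print(f"Response time to event: {response_time(9.5, 11.1):.1f} s")
```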

3.6  Supporting Accurate and Complete Mental Models

In order to safely use automated systems, drivers must fundamentally understand their role responsibility on a moment-by-moment basis. In essence, they must be able to accurately translate their general mental model into the applied mental model question of “who is responsible and most capable of reacting to the present vehicle and situation dynamics: me or the automated system?”. Recent research points to both general confusion about the capabilities of deployed forms of vehicle automation (i.e., inaccurate general mental models; e.g., Abraham, Seppelt, Mehler, & Reimer, 2017) and an incomplete understanding of, and expectations regarding, the limitations of automation (e.g., Victor et al., 2018). Based on current system design practices, there appears to be a fundamental disconnect between drivers’ general and applied mental models. For example, in Victor et al. (2018), drivers were trained prior to use on the limits of highly reliable (but not perfect) automation. In practice, however, they experienced reliable system operation until the final moment of the study. Regardless of the amount of initial training on system limitations, 30% of drivers crashed into the object they “knew” was outside the detection capabilities of the system. Without reinforcement of the general mental model by dynamic experience, the trained information decayed or was dominated by dynamically learned trust. Consistent with previous findings on the development of mental models of vehicle automation relative to initial information (Beggiato, Pereira, Petzoldt, & Krems, 2015), this study found that system limitations initially described but not experienced tend to disappear from a driver’s mental model. Research to date on mental models of vehicle automation indicates a need to support and/or train drivers’ understanding of in situ vehicle limitations and capabilities through dynamic HMI information (Seppelt & Lee, 2019; this Handbook, Chapters 15, 16, 18), routine driver training, and/or intelligent tutoring systems.
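
One simple way to picture this decay of trained-but-never-experienced limitations is an exponential forgetting curve competing with experience-driven trust. The toy model below is purely illustrative and is not the model used in Victor et al. (2018) or Beggiato et al. (2015); all parameters are arbitrary.

```python
# Toy illustration: a trained-but-unreinforced limitation fades from the mental model
# while repeated flawless operation builds learned trust. Parameters are arbitrary.
import math

DECAY = 0.05       # per-drive forgetting rate for unreinforced knowledge (assumed)
TRUST_GAIN = 0.08  # per-flawless-drive increase toward full trust (assumed)

limitation_strength = 1.0   # right after training: the limit is "known"
trust = 0.3                 # initial, pre-use trust

for drive in range(1, 41):  # 40 drives with no exposure to the limitation
    limitation_strength *= math.exp(-DECAY)
    trust += TRUST_GAIN * (1.0 - trust)

print(f"After 40 uneventful drives: limitation knowledge ~{limitation_strength:.2f}, "
      f"learned trust ~{trust:.2f}")
```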

3.7  Conclusion

This chapter described the importance and relevance of mental models in driving and automation, defined how fundamental psychological mechanisms contribute to their formation, provided a new framing of general and applied mental models and how to measure them, and concluded with a review of recent research on how to support accurate and complete mental model development. Future research needs to examine relationships between mental models, trust, and both short- and long-term acceptance across types and combinations of vehicle automation.

References

Abraham, H. , Seppelt, B. , Mehler, B. , & Reimer, B. (2017). What’s in a name: Vehicle technology branding & consumer expectations for automation. Proceedings of the 9th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications. New York: ACM.
Beggiato, M. & Krems, J. F. (2013). The evolution of mental model, trust and acceptance of adaptive cruise control in relation to initial information. Transportation Research Part F: Traffic Psychology and Behaviour, 18, 47–57.
Beggiato, M. , Pereira, M. , Petzoldt, T. , & Krems, J. (2015). Learning and development of trust, acceptance and the mental model of ACC. A longitudinal on-road study. Transportation Research Part F: Traffic Psychology and Behaviour, 35, 75–84.
Bennett, K. B. , & Flach, J. M. (2011). Display and interface design: Subtle science, exact art. CRC Press.
Bhana, H. (2010). Trust but verify. AeroSafety World, 5(5), 13–14.
Boer, E. R. & Hoedemaeker, M. (1998). Modeling driver behavior with different degrees of automation: A hierarchical decision framework of interacting mental models. In Proceedings of the XVIIth European Annual Conference on Human Decision Making and Manual Control, 14–16 December, Valenciennes, France.
Casner, S. M. , Geven, R. W. , Recker, M. P. , & Schooler, J. W. (2014). The retention of manual flying skills in the automated cockpit. Human Factors, 56(8), 1506–1516.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204. doi:10.1017/S0140525X12000477
Cotter, S. & Mogilka, A. (2007). Methodologies for the assessment of ITS in terms of driver appropriation processes over time (HUMANIST Project Deliverable 6 of Task Force E).
Dekker, S. W. A. & Woods, D. D. (2002). MABA-MABA or Abracadabra? Progress on human-automation co-ordination. Cognition, Technology and Work, 4, 240–244.
Durso, F. T. , Rawson, K. A. , & Girotto, S. (2007). Comprehension and situation awareness. In F. T. Durso , R. S. Nickerson , S. T. Dumais , S. Lewandowsky , & T. J. Perfect (Eds.), Handbook of Applied Cognition (2nd ed., pp. 163–193). Chichester, UK: John Wiley & Sons.
Engström, J. , Bärgman, J. , Nilsson, D. , Seppelt, B. , Markkula, G. , Piccinini, G. B. , & Victor, T. (2018). Great expectations: A predictive processing account of automobile driving. Theoretical Issues in Ergonomics Science, 19(2), 156–194.
Flemisch, F. , Winner, H. , Bruder, R. , & Bengler, K. (2014). Cooperative guidance, control and automation. In H. Winner , S. Hakuli , F. Lotz , & C. Singer (Eds.), Handbook of Driver Assistance Systems: Basic Information, Components and Systems for Active Safety and Comfort (pp. 1471–1481). Berlin: Springer.
Gentner, D. & Stevens, A. L. (1983). Mental Models. Hillsdale, NJ: Lawrence Erlbaum Associates.
Haselton, M. G. , Nettle, D. , & Andrews, P. W. (2005). The evolution of cognitive bias. In D. M. Buss (Ed.), The Handbook of Evolutionary Psychology (pp. 724–746). Hoboken, NJ: John Wiley & Sons.
Hilbert, M. (2012). Toward a synthesis of cognitive biases: How noisy information processing can bias human decision-making. Psychological Bulletin, 138(2), 211–237.
Johnson-Laird, P. (1983). Mental Models. Cambridge, MA: Harvard University Press.
Kahneman, D. & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich , D. Griffin , & D. Kahneman (Eds.), Heuristics and Biases (pp. 49–81). Cambridge: Cambridge University Press.
Kazi, T. A. , Stanton, N. A. , & Harrison, D. (2004). The interaction between drivers’ conceptual models of automatic-cruise-control and level of trust in the system. 3rd International Conference on Traffic & Transport Psychology (ICTTP). Nottingham, UK: ICTTP.
Kazi, T. , Stanton, N. A. , Walker, G. H. , & Young, M. S. (2007). Designer driving: drivers’ conceptual models and level of trust in adaptive cruise control.
Kempton, W. (1986). Two theories of home heat control. Cognitive Science, 10(1), 75–90.
Larsson, A. F. L. (2012). Driver usage and understanding of adaptive cruise control. Applied Ergonomics, 43, 501–506.
Lee, J. D. (2018). Perspectives on automotive automation and autonomy. Journal of Cognitive Engineering and Decision Making, 12(1), 53–57.
Lee, J. D. & Moray, N. (1994). Trust, self-confidence, and operators’ adaptation to automation. International Journal of Human-Computer Studies, 40, 153–184.
Lee, J. D. & See, K. A. (2004). Trust in technology: Designing for appropriate reliance. Human Factors, 46(1), 50–80.
Itoh, M. (2012). Toward overtrust-free advanced driver assistance systems. Cognition, Technology & Work, 14(1), 51–60.
Moray, N. (1986). Monitoring behavior and supervisory control. In K. R. Boff , L. Kaufman , and J. P. Thomas (Eds.), Handbook of Perception and Human Performance (Vol. 2, Chapter 40). New York: Wiley.
Moray, N. (1999). Mental models in theory and practice. In D. Gopher & A. Koriat , (Eds.), Attention and Performance XVII Cognitive Regulation of Performance: Interaction of Theory and Application. Cambridge: MIT Press.
Norman, D. A. (1983). Some observations on mental models. In D. Gentner & A. L. Stevens (Eds.), Mental Models (pp. 7–14). Hillsdale, NJ: Lawrence Erlbaum Associates.
Norman, D. A. (1986). Cognitive engineering. In D. A. Norman & S. W. Draper (Eds.), User Centered System Design: New Perspectives on Human-Computer Interaction (pp. 31–61). Hillsdale, NJ: Lawrence Erlbaum Associates.
Norman, D. A. (1988). The Psychology of Everyday Things. New York: Basic Books.
Norman, D. A. (1990). The ‘problem’ with automation: Inappropriate feedback and interaction, not ‘over-automation’. Philosophical Transactions of the Royal Society London, Series B, Biological Sciences, 327(1241), 585–593.
Onnasch, L. , Wickens, C. D. , Li, H. , & Manzey, D. (2014). Human performance consequences of stages and levels of automation: An integrated meta-analysis. Human Factors, 56(3), 476–488.
Oswald, M. E. & Grosjean, S. (2004). Confirmation bias. In R. F. Pohl (Ed.), Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement, and Memory (pp. 79–96). Hove, UK: Psychology Press.
Parasuraman, R. , Mouloua, M. , & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua and R. Parasuraman (Eds.), Human Performance in Automated Systems: Current Research and Trends (pp. 45–49). Hillsdale, NJ: Lawrence Erlbaum Associates.
Payne, S. J. (1991). A descriptive study of mental models. Behaviour & Information Technology, 10(1), 3–21.
Revell, K. M. & Stanton, N. A. (2017). Mental Models: Design of User Interaction and Interfaces for Domestic Energy Systems. Boca Raton, FL: CRC Press.
Rouse, W. B. & Morris, N. M. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100, 359–363.
Rumelhart, D. E. & Norman, D. A. (1981). Analogical processes in learning. In J. R. Andersen (Ed.), Cognitive Skills and Their Acquisition (pp. 335–359). New York: Psychology Press.
Rumelhart, D. E. & Ortony, A. (1977). The representation of knowledge in memory. In R. C. Anderson & R. J. Spiro (Eds.), Schooling and the Acquisition of Knowledge (pp. 99–135). Hillsdale, NJ: Lawrence Erlbaum Associates.
SAE International. (2018). Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (SAE Standard J3016). Warrendale, PA: Society of Automotive Engineers.
Sarter, N. B. & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots’ model and awareness of the flight management system. The International Journal of Aviation Psychology, 4(1), 1–28.
Sarter, N. B. , Woods, D. D. , & Billings, C. E. (1997). Automation surprises. In G. Salvendy (Ed.), Handbook of Human Factors & Ergonomics, Second Edition. Hoboken, NJ: Wiley.
Seppelt, B. D. (2009). Supporting Operator Reliance on Automation through Continuous Feedback. Unpublished PhD dissertation, The University of Iowa, Iowa City.
Seppelt, B. D. & Lee, J. D. (2007). Making adaptive cruise control (ACC) limits visible. International Journal of Human-Computer Studies, 65(3), 192–205.
Seppelt, B. D. & Lee, J. D. (2019). Keeping the driver in the loop: Enhanced feedback to support appropriate use of imperfect vehicle control automation. International Journal of Human-Computer Studies, 125, 66–80.
Seppelt, B. D. , Reimer, B. , Angell, L. , & Seaman, S. (2017). Considering the human across levels of automation: Implications for reliance. Proceedings of the Driving Assessment Conference: 9th International Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design. Iowa City, IA: University of Iowa, Public Policy Center.
Shiffrin, R. M. & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychological Review, 84(2), 127–190.
Summala, H. (2007). Towards understanding motivational and emotional factors in driver behaviour: Comfort through satisficing. In P. C. Cacciabue (Ed.), Modelling Driver Behaviour in Automotive Environments. London: Springer.
Stanovich, K. E. & West, R. F. (2002). Individual differences in reasoning: Implications for the rationality debate. In T. Gilovich , D. Griffin , & D. Kahneman (Eds.), Heuristics and Biases (pp. 421–440). Cambridge: Cambridge University Press.
Stanton, N. A. & Young, M. S. (2005). Driver behavior with adaptive cruise control. Ergonomics, 48(10), 1294–1313.
Tversky, A. & Kahneman, D. (1974). Judgement under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Vicente, K. J. & Rasmussen, J. (1992). Ecological interface design: Theoretical foundations. IEEE Transactions on Systems, Man, and Cybernetics, SMC-22(4), 589–606.
Victor, T. W. , Tivesten, E. , Gustavsson, P. , Johansson, J. , Sangberg, F. , & Ljung Aust, M. (2018). Automation expectation mismatch: Incorrect prediction despite eyes on threat and hands on wheel. Human Factors, 60(8), 1095–1116.
Wickens, C. D. (1984). Engineering Psychology and Human Performance. Columbus, OH: Merrill.
Wickens, C. D. , Li, H. , Santamaria, A. , Sebok, A. , & Sarter, N. B. (2010). Stages and levels of automation: An integrated meta-analysis. Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 389–393). Los Angeles, CA: Sage Publications.
Wiener, E. L. (1989). Human Factors of Advanced Technology ("Glass Cockpit") Transport Aircraft (NASA Contractor Report 177528). Mountain View, CA: NASA Ames Research Center.
Wilson, J. R. & Rutherford, A. (1989). Mental models: Theory and application in human factors. Human Factors, 31, 617–634.
Zhang, Y. , Lewis, M. , Pellon, M. , & Coleman, P. (2007). A preliminary research on modeling cognitive agents for social environments in multi-agent systems. AAAI Fall Symposium: Emergent Agents and Socialities (p. 116). Retrieved from: https://www.aaai.org/Papers/Symposia/Fall/2007/FS-07-04/FS07-04-017.pdf