Situation Awareness, Sociotechnical Systems, and Automation in Emergency Medical Services

Theory and Measurement

Authored by: David Schuster, Dan Nathan-Roberts

Human Factors and Ergonomics of Prehospital Emergency Care

Print publication date: March 2017
Online publication date: March 2017

Print ISBN: 9781482242515
eBook ISBN: 9781315280172




Decision-making, the selection of a choice among alternatives, is of critical importance in EMS. Care is provided under great time pressure, with many concurrent, high-stakes activities in a high-technology environment in which practitioners (either providing care or operating an ambulance), dispatchers, and patients all affect the outcome. In this environment, patient outcomes depend on timely and accurate human decision-making. Improving the decision-making of highly trained and high-performing professionals is a difficult challenge. In this chapter, we argue that decision-making in EMS can be improved through interventions targeting the immediate precursors of good decision-making. Situation awareness (SA), the degree to which one has actionable, goal-directed knowledge (Rousseau et al., 2004) of elements in the prehospital environment, provides a metric of those precursors of effective decision-making in EMS.


Across domains, SA has been most effectively described as a predictor of quality decision-making in sociotechnical systems. It is distinct from, but related to, preexisting knowledge, individual characteristics, workload, situational conditions, and human performance (Durso et al., 2006). That is, SA describes a state of holding relevant knowledge. It describes whether individuals have the knowledge needed in order to perform a task (Endsley, 1995). Consequently, SA provides a way to diagnose decision-making: in the moment, does an individual have the information needed to make the best decision?

Identifying deficiencies in SA and addressing them can lead to better decision-making, which, in turn, will improve patient outcomes. In the prehospital environment, however, individual decisions are not made in a vacuum. In order to positively impact the complex prehospital environment, human factors practitioners must consider how healthcare providers' behaviors interact within a complex, dynamic network of distributed team members, patients, technology, culture, and other factors. Thus, EMS is a sociotechnical system. Sociotechnical system frameworks focus on a large system's overall performance through a high-level analysis of the individual components of the system and their interrelatedness as a means of understanding the system's performance as a whole (Holden et al., 2013). EMS can benefit from the structured tools of sociotechnical system analysis to better understand the impact of elements of the larger system on prehospital SA and decision-making.

In this chapter, we first explain how EMS can be described as a sociotechnical system, some of the complexities and challenges inherent in sociotechnical systems, and the benefits of this perspective. Next, we summarize the state of the art in SA theory and practice as they apply to EMS. We illustrate how SA fills a gap in our understanding of how individual human performance affects patient care. At the same time, we consider the limitations of our current use of SA, both theoretically and in the field. The chapter concludes with recommendations for the measurement of SA, the use of SA as a performance metric, and methods to predict the impact of interventions on SA.

EMS as a Sociotechnical System

Sociotechnical systems are a way of understanding and improving systems at the large-scale, macroergonomic level. Through the use of various high-level frameworks, such as macroergonomic analysis of structure, macroergonomic analysis and design, and Systems Engineering Initiative for Patient Safety (SEIPS) 2.0 (Holden et al., 2013), sociotechnical systems provide a structured way to characterize a system or identify the components of the larger system with the most room for improvement. As it relates to EMS, sociotechnical system analysis is useful for looking holistically at the complexity of EMS across functional or organizational boundaries to identify gaps that might not be caught by using a more focused research lens. For example, the larger external cultural environment, such as friction between an administration and a union, may have impacts on the exchange of SA between system parts.

As an example, SEIPS and its successor, SEIPS 2.0, which are arguably the most widely used sociotechnical models in healthcare, separate a large system into three interrelated areas of analysis: work system, processes, and outcomes, with each area further subdivided. Figure 3.1 shows the SEIPS 2.0 framework. The performance of EMS can be analyzed in terms of outcomes (patient, professional, and organizational) based on the processes used. To provide structure to the analysis of the EMS work system, the persons; tools and technology; tasks; organization; internal environment (including environmental ergonomics); and external environment (including culture) are all studied as interacting components. Individual components and interactions can be studied using traditional human factors techniques, such as Rapid Upper Limb Assessment (McAtamney & Corlett, 1993) for tasks, or electronic health record usability evaluation (National Center for Human Factors in Healthcare, 2015) for technology.

This approach also provides analysis of the interactions among components. For example, poor physical ergonomics along with an internal culture that does not value worker safety may lead to undesirable patient outcomes by negatively impacting the physical processes of collaborative professional–patient work. Generally, the way in which components interact can be studied, as they provide barriers or facilitators to positive outcomes.

Healthcare is difficult to characterize because of its interconnectedness, varying levels of hierarchy, critical temporal aspects, distributed teams, highly variable workload, and highly variable problems and procedures. Sociotechnical systems are often categorized by tightness of coupling (how closely a change in one area affects another area) and level of criticality (how dangerous errors can be). For example, automotive manufacturing is a tightly coupled system, but the danger to human health of delaying the production line is rather low compared to healthcare. Conversely, sanitation departments play a critical role in our society, but they are not tightly coupled with the rest of the system. Unlike the automotive, sanitation, or energy domains, where sociotechnical system analysis is also heavily used, healthcare is a very tightly coupled system with high criticality. However, this also means that healthcare has the most to gain from improvements found by reducing sociotechnical barriers. Work systems are often thought of as colocated and synchronized temporally, but EMS teams can operate as distributed networks with both synchronous and asynchronous aspects. This difference from traditional inpatient care or traditional outpatient clinics makes EMS an even more tightly coupled, high-criticality system to study.

To properly analyze a system by using a sociotechnical framework, it is necessary to draw boundaries of analysis around the EMS or around the system under analysis. Anything external to the EMS is considered the external environment in a system model, and while it is proximal, it should be studied only in sufficient detail to provide information on the work system. In general, sociotechnical system analysis is not a panacea or recipe for analysis, but a holistic lens that can guide practitioners toward other human factors tool sets that can be used to examine and improve human decision-making. Next, we consider SA as one tool for sociotechnical system analysis.

FIGURE 3.1   SEIPS 2.0 framework. (Reprinted from Holden, R.J. et al., Ergonomics, 56(11), 1669–1686, 2013. With permission.)

Situation Awareness

SA was a concept described by fighter pilots before it was studied scientifically (Harwood et al., 1988). In general, SA describes goal-directed knowledge held by a decision maker, such as an emergency medical technician (EMT) (Rousseau et al., 2004). SA is distinguished from generalized knowledge in that it applies to the current task environment. SA is further distinguished from everything that could be known within an environment by the constraint that the knowledge must support a goal. For example, the particular color of paint used on the exterior of an ambulance is of no use in the care and transport of a patient. Thus, SA is defined by the goals of the individuals in the present situation. Measuring SA, then, first involves identifying all the possible goal-directed knowledge; the proportion of this knowledge held by an individual is that individual's SA.
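As a minimal sketch of this proportion-based definition, SA can be scored as the fraction of goal-directed knowledge elements an individual actually holds. The element names below are hypothetical illustrations, not drawn from any validated instrument:

```python
# Hypothetical sketch: SA scored as the proportion of goal-directed
# knowledge elements held by one individual. Element names are invented.

GOAL_DIRECTED_ELEMENTS = {
    "patient_heart_rate",
    "patient_blood_pressure",
    "transport_destination",
    "estimated_arrival_time",
}

def sa_score(knowledge_held: set[str]) -> float:
    """Proportion of the goal-directed knowledge set held by one individual.

    Knowledge outside the goal-directed set (e.g., the ambulance's paint
    color) is ignored; SA is defined only over goal-relevant elements.
    """
    if not GOAL_DIRECTED_ELEMENTS:
        return 0.0
    held = knowledge_held & GOAL_DIRECTED_ELEMENTS
    return len(held) / len(GOAL_DIRECTED_ELEMENTS)

# An EMT who knows vitals and destination, plus goal-irrelevant trivia:
emt_knowledge = {"patient_heart_rate", "patient_blood_pressure",
                 "transport_destination", "ambulance_paint_color"}
print(sa_score(emt_knowledge))  # 0.75
```

Note that the goal-irrelevant element contributes nothing to the score, mirroring the distinction between SA and trivia.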

Endsley's Model

Although there are competing models with various degrees of overlap, Endsley's (1995) model is dominant. It describes SA at three hierarchical but nonlinear levels (Endsley, 2015a). Level 1 is the perception of relevant elements in the environment. At level 1 SA, an individual attends to individual pieces of relevant information. Level 2 SA is the comprehension of the current situation; at this level, an individual connects pieces of information together in order to apply elements to the situational context. Level 3 SA is the projection of future status; it is knowledge of the state of elements in the future.

Team SA

The success of SA as a construct of applied cognition has come with challenges. While it has found particular success in aviation settings, a continued challenge has been the application of SA to teamwork. Because teamwork is more than the sum of its parts (Durso & Sethumadhavan, 2008) and teamwork in a sociotechnical system is affected by work system components, several approaches to measuring team cognition exist. In describing team-level SA, two approaches have received significant attention in the literature.

First is Endsley's concept of team SA. Team SA has been defined as “the degree to which every team member possesses the SA needed for his or her job” (Endsley, 1995, p. 39). It suggests a largely additive process in which individual practitioners are contributors to the team through a process of building individual SA. Shared SA is the overlap of the SA of individuals; it is where the SA of multiple individuals is equivalent. Shared SA is “the degree to which team members have the same SA on shared SA requirements” (Endsley & Jones, 2001, p. 48).

In this model, SA is held exclusively within human team members. This model extends beyond individual SA in the mechanisms by which individuals build SA. Team process, identified by Salas et al. (1995), is a component of teamwork. Team process includes the communication and coordination behaviors that team members engage in as they perform taskwork. Endsley and Jones (2001) extended this work by identifying the requirements, devices, processes, and mechanisms of team SA. Communication, shared environments, and shared displays are the devices of team SA (Endsley & Jones, 2001). A limitation of this approach is that it can be difficult to model the complex impact of automation and other elements of the sociotechnical system if they do not cause an observable change in one individual's SA.

Distributed SA

Stanton et al. (2006) argued for a different approach. Their concept of distributed SA treats SA as an artifact of the sociotechnical system. In this view, SA can be held in technological artifacts as well as by human team members. An example, such as that of a pulse oximeter, demonstrates the differences between these two perspectives. Is the status of the pulse oximeter sensor critical to one of the EMTs, or is it sufficient for the monitor to run without direct attention? Assuming that the oximeter provides an alert, is perfectly accurate, and is not being read, information held in the monitor would not be part of team SA. It would, however, be considered a part of distributed SA in that it is relevant information held by a technological agent (the meter). When the EMT must attend to an alert or read the meter, this information becomes part of that EMT's individual SA or, when told to another team member, part of shared SA. Under the distributed SA perspective, reading the value from the meter causes that information to be shared between the technological agent and the EMT. Endsley (2015a, p. 26) argued against a distributed approach to SA, suggesting that only automation that is a “cognizant and independent decision maker” could be considered to have SA. Since the technology used in ambulances is becoming ever more sophisticated and decisions in sociotechnical systems are so tightly coupled, this line is not a clear one. Distributed SA is attractive because it provides a way to describe the benefits of automating information processing. However, distributed SA measures the degree to which information is managed within the sociotechnical system, rather than what any one individual knows.

We reiterate an argument that these perspectives may be partially reconciled by quantifying both overlapping and nonoverlapping information and allowing agents to be diverse in their information processing (Cain & Schuster, 2014; Ososky et al., 2012). That is, both people and technology may contribute to SA, but cognition is a unique property of people. Individual SA describes the knowledge held by one individual, with some of this information shared by others. Individual SA that overlaps is shared SA; individual SA held by one agent, human or technological, but relevant to others is complementary SA.
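Under this reconciled view, the shared/complementary distinction reduces to simple set operations over each agent's goal-relevant knowledge. The agent names and knowledge elements below are invented for illustration:

```python
# Illustrative sketch of shared vs. complementary SA among diverse agents.
# Agent names and knowledge elements are hypothetical.

emt = {"patient_airway_status", "spo2_reading", "scene_safety"}
driver = {"scene_safety", "route_to_hospital"}
pulse_oximeter = {"spo2_reading"}  # information held by a technological agent

# Shared SA: information held in common by two agents.
shared_emt_driver = emt & driver  # both know scene safety

# Complementary SA: held by some agent (human or technological) but not
# by the agent to whom it is relevant.
complementary_to_driver = (emt | pulse_oximeter) - driver

print(shared_emt_driver)             # {'scene_safety'}
print(sorted(complementary_to_driver))
```

The technological agent contributes to complementary SA here, while cognition itself remains attributed only to the human agents.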

While it is important for researchers and practitioners to understand the theoretical complexity surrounding SA, much of the theoretical debate is of limited relevance to the practitioner. What is critical, regardless of the theoretical approach, is that SA reflects goal-directed knowledge. The proper definition of goals is therefore critical. If goals include good decision-making on the part of an individual human, then an individual's SA must be defined based on the information needed for individual decision-making. In any case, we have made a case for a sociotechnical system approach to SA in EMS regardless of which theoretical model is adopted.

Issues in EMS Decision-Making and Recommendations

Issue: SA Definition Confusion and Multiple Measurement Techniques

Both SA and human performance have been defined in many ways, leading to confusion about the use of SA. Parasuraman et al. (2008) argued that SA is distinct from generalized knowledge, task performance, and the quality of decision-making. They suggested that SA is distinctly useful in atypical circumstances. In post hoc investigations of accidents in domains such as aviation and driving, SA has frequently been found to be a contributing factor (Jones & Endsley, 1996; Durso & Sethumadhavan, 2008). Some evidence for this claim is found in empirical work examining differences between expert and novice pilots. Individual SA is a construct with well-established predictive validity (Durso et al., 2006) and a widely cited model, but it is difficult to model in general terms without context (Wickens, 2015). In EMS, Endsley's model can be used to diagnose the cognitive performance of practitioners within the sociotechnical system despite debates about the meaning and validity of subcomponents of Endsley's model (e.g., Flach, 1995; Hoffman, 2015). Therefore, practitioners should weigh the relatively small effort of conducting goal-directed task analysis against the utility of capturing goal-relevant knowledge, since SA predicts a variety of diverse performance outcomes. While it is tempting to identify everything that could possibly be known in an environment and reward those who hold the greatest amount of information during task performance, SA is an appropriate metric only when it targets specific goals rather than all possible outcomes. Appropriate SA measurement should focus exclusively on the knowledge needed to immediately achieve such goals. The goal-oriented nature of SA distinguishes it from trivia.

There are a variety of measurement techniques available. The situation awareness global assessment technique (SAGAT) is an example of a measurement technique in which a task or simulation is paused and then individuals are prompted to answer questions that sample SA (Endsley, 1988, 2000). SPAM, the situation present assessment method (Durso & Dattel, 2004), is a variation of this technique in which questions are made available randomly with a signal. The individual pushes a button to view and respond to the question as soon as they are able, but the task environment continues uninterrupted. Questions could be offered on a tablet or other mobile device. Neither SAGAT nor SPAM measurement would be suitable in an operational environment, but both are well suited to simulation and training environments.

SPAM offers a way to simultaneously measure workload; question response latency can be used as a proxy measure of workload. When operators are under high demand, they may take longer to respond to the probe questions (Durso et al., 2006). Quantifying the time to question can provide a supplementary workload measure without requiring a separate questionnaire, although this method may be less sensitive than other assessments of workload (Pierce et al., 2008).
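A scoring sketch along these lines separates SPAM probe accuracy, taken as the SA measure, from response latency on correct probes, taken as the supplementary workload proxy. The probe data, field names, and values here are hypothetical:

```python
# Hypothetical SPAM scoring sketch: probe accuracy indexes SA, while
# latency on correctly answered probes serves as a workload proxy.
from dataclasses import dataclass
from statistics import mean

@dataclass
class ProbeResponse:
    correct: bool
    latency_s: float  # seconds from accepting the probe to answering it

def spam_summary(responses: list[ProbeResponse]) -> tuple[float, float]:
    """Return (probe accuracy, mean latency over correct responses)."""
    accuracy = mean(1.0 if r.correct else 0.0 for r in responses)
    correct_latencies = [r.latency_s for r in responses if r.correct]
    mean_latency = mean(correct_latencies) if correct_latencies else float("nan")
    return accuracy, mean_latency

# Three illustrative probe responses from one simulation session:
responses = [ProbeResponse(True, 2.1), ProbeResponse(True, 3.9),
             ProbeResponse(False, 6.0)]
acc, lat = spam_summary(responses)
print(acc, lat)  # 0.666..., 3.0
```

Longer mean latencies would be read, cautiously, as higher demand on the operator, consistent with the proxy interpretation above.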

SA measures that are not domain specific also exist, but they suffer from validity issues. The most common of these, the situation awareness rating technique (SART) (Taylor, 1990), is a self-report questionnaire that asks an individual to report their levels of awareness along several dimensions: supply of attentional resources, demand on attentional resources, and understanding of the present situation. It is administered after task performance. If an individual lacks SA but does not know what they do not know, they may rate their SA highly. Further, high-performing individuals may be better able to think about their own SA losses and could rate themselves lower than our oblivious individual. In this way, SART is more of a metacognitive measure than an SA measure. Empirical data have provided support for these theoretical criticisms (Endsley, 1988). Although the SART is the least costly measure to implement, the validity issues make it less useful for operational use in EMS.

Recommendation: Unless such measures already exist, use goal-directed task analysis to facilitate the use of domain-specific measures of SA. Goal-directed task analysis is a method for describing the knowledge requirements in a situation (Endsley & Jones, 2012). This method requires access to high-performing individuals to identify their goals and subgoals. The result of this analysis can be represented as a hierarchical list of information requirements that can be classified at the three levels of SA. This technique can also identify shared SA requirements (Bolstad et al., 2002). It is important that a continuum of expertise is represented in goal-directed task analysis, as experience brings more efficient use of working memory and better strategies (Endsley, 2015b).

Goal-directed task analysis is a prerequisite for defining ideal SA in a new environment. While goal-directed task analysis results in a hierarchical list of information without description of how that information is obtained, it is inherently bound to an operational context. Consequently, SA measures may need revision as prehospital teams vary in their composition or as technology affects what knowledge must be held by team members in the prehospital environment.
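One way to represent the output of such an analysis in software is a small hierarchy of goals whose information requirements are tagged with Endsley's three SA levels. The goals and requirements shown are invented examples, not a validated goal-directed task analysis for EMS:

```python
# Hypothetical sketch of a goal-directed task analysis result: a hierarchy
# of goals, each with information requirements tagged by SA level
# (1 = perception, 2 = comprehension, 3 = projection).
from dataclasses import dataclass, field

@dataclass
class Requirement:
    description: str
    sa_level: int  # 1, 2, or 3

@dataclass
class Goal:
    name: str
    requirements: list[Requirement] = field(default_factory=list)
    subgoals: list["Goal"] = field(default_factory=list)

    def requirements_at(self, level: int) -> list[str]:
        """Collect requirement descriptions at one SA level, recursively."""
        found = [r.description for r in self.requirements if r.sa_level == level]
        for sub in self.subgoals:
            found.extend(sub.requirements_at(level))
        return found

stabilize = Goal("Stabilize patient", [
    Requirement("Current vital signs", sa_level=1),
    Requirement("Whether vitals indicate shock", sa_level=2),
])
transport = Goal("Deliver patient to definitive care", [
    Requirement("Expected patient status on arrival", sa_level=3),
], subgoals=[stabilize])

print(transport.requirements_at(2))  # ['Whether vitals indicate shock']
```

Flattening the hierarchy by level in this way yields the classified information-requirement lists from which domain-specific SA probes could then be written.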

Issue: Individual SA Provides an Incomplete Picture

Individual SA will likely predict the quality of patient care, but it is not the whole story. To properly capture SA, practitioners and researchers should take care to broadly consider SA in the sociotechnical system. Because SA is inherently tied to tasks, the boundaries of the sociotechnical system (i.e., the situation) must be carefully delineated. In EMS, the activities within the ambulance represent one point along the care continuum but exclude events related to dispatch and transfer to the emergency department. Within the situation, individual decision makers should be identified. Individual decision makers will include EMS practitioners, but they should also include other human team members such as physicians and dispatchers. Additionally, the contribution of any technology capable of decision-making should also be included.

Recommendation: Capture team process to contextualize individual SA metrics. Despite theoretical debates, application of Endsley's (1995) model is likely to be a useful approach. However, an analysis that identifies gaps in individual SA can be misleading without identifying deficiencies in team process behaviors, including how information is communicated (Cooke et al., 2007). As members of a team, loss of SA by an individual is unlikely to be an isolated problem. Both the antecedents of SA loss and the impacts of SA loss are intertwined with other elements of the sociotechnical system. Consequently, the goal should not be to single out low-performing individuals but to understand how individual performance is hindered or facilitated by elements of the sociotechnical system.

At one end of a spectrum, the simplest approach to measurement is quantitative assessment of the SA provided by human team members and technological decision aids. Individual SA measurement can be augmented by quantitative or qualitative analysis of team process behaviors. At the other end of this spectrum is a distributed approach to SA measurement, which focuses to the greatest degree on the interactions among elements in the sociotechnical system. A distributed approach to SA measurement may be possible with a well-defined situation model and a thorough understanding of the contribution of technology to individual SA. At present, empirical work is still needed to support a comprehensive understanding of EMS as a sociotechnical system.

Issue: Automation in the Ambulance Can Have Unanticipated Impacts on SA

Since its inception, research on SA has been applied to human–technology interaction, starting with the aviation cockpit. Other domains in which SA has been effectively applied also frequently involve automated tools. EMS is no exception, with increasingly sophisticated medical equipment becoming a part of prehospital care. Examples include the increasing computerization of medical equipment and the use of diagnostic aids, such as the National Institutes of Health Stroke Scale administered on a tablet (Padrick et al., 2015).

In other domains, empirical results have suggested, on the surface, that automated tools reduce workload and increase performance. Subsequent research has suggested that SA moderates this relationship (Wickens et al., 2010). However, high levels of automation can lead to a loss of SA (Endsley & Kiris, 1995). Consequently, SA is useful in diagnosing the impact of automation on decision-making performance. Several recurring issues in human–automation interaction are common in sociotechnical systems across domains. Fortunately, an understanding of the potential issues introduced by automation can inform automation design to minimize their impact.

First, automation may increase workload by placing additional demands on the practitioner; when it does, SA is likely to be reduced (Vortac et al., 1993). Since critical decisions are regularly made under conditions of high mental and physical workload, it is important that new tools do not place an additional burden on already taxed resources. Another issue is the out-of-the-loop performance problem (Endsley & Kiris, 1995), which characterizes a situation of high-performing, but imperfect, automation. Automation can be imperfect by occasionally failing outright, such as in the case of a malfunctioning sensor that reports either no value or an incorrect value. Automation can also be imperfect in that it fails to be usable in complex, high-workload situations. In either case of imperfect automation, human operators find themselves largely able to rely on the automation, except during rare occurrences. In these rare occurrences, humans are called “back into the loop” and must take over physical or cognitive tasks typically performed by the automation. People are particularly bad at a rapid, unanticipated shift to manual control. The result is a loss of SA.

Recommendations: Avoid EMS automation that overpromises and underdelivers. Provide training to users of automation on the true capabilities and limitations of automation to encourage appropriate trust. Train for mitigation procedures under automation failure.

Notably, such a loss of SA might not be observed in the case of poorly functioning automation. When working with automation, people base trust attributions on their perceptions of automation performance (Johnson et al., 2009; Muir & Moray, 1996; Oleson et al., 2011, p. 176). Automation which is known to be unreliable may be relied upon less (Yeh & Wickens, 2001). Although this may seem to mitigate the out-of-the-loop performance problem, it may be more accurately described as a “stuck-in-the-loop” performance problem. Such automation is either appropriately disregarded, in which case it provides no benefit, or inappropriately underutilized, a condition called disuse (Parasuraman & Riley, 1997). When human operators of otherwise useful automation elect not to use it, the problem is a lack of trust in automation (Lee & See, 2004; Parasuraman & Riley, 1997). Trust in automation is “the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability” (Lee & See, 2004, p. 54). This state of affairs can become evident during goal-directed task analysis. Experts, as part of task analysis, may identify technology that is rarely used as intended due to its perception as being ineffective. When trust is appropriately calibrated, it is adaptive: it allows people to appropriately use automated tools.

To encourage appropriately calibrated trust, it is important that the performance of automation in the prehospital environment matches practitioner perceptions of its performance. All other factors being equal, trust in automation will be higher with higher levels of automation reliability. Training that overstates the capabilities of automation may have the effect of reducing use of the automation. For many forms of automation, perfect reliability is not possible. If automation can require unanticipated human intervention, especially during periods of time pressure and high workload, practitioners should be trained for these scenarios to minimize the effects of the out-of-the-loop performance problem.


Our aim in this chapter was to connect EMS to decades of research on SA. As a sociotechnical system, measurement of SA in EMS can be more complex than in other domains. However, several theoretical perspectives and approaches are available to practitioners using SA as a performance metric. Whether using a team SA or distributed SA approach, measurement of SA can capture the quality of teamwork and the emergent cognition of the sociotechnical system. However, this approach requires the greatest understanding of how the diverse elements in the sociotechnical system (including team members, patients, and technologies) interact. Alternatively, a less nuanced approach to SA can incorporate individual SA measures along with considerations of other components of the sociotechnical system, such as human–automation interaction. Although each provides only one piece, together they can diagnose the precursors of performance in EMS.


Bolstad, C. A. , Riley, J. M. , Jones, D. G. & Endsley, M. R. (2002). Using goal directed task analysis with Army brigade officer teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 472–476. Santa Monica, CA: Human Factors and Ergonomics Society.
Cain, A. A. & Schuster, D. (2014). Measurement of situation awareness among diverse agents in cyber security. Proceedings of the IEEE International Inter-disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 124–129. San Antonio, TX: Institute of Electrical and Electronics Engineers.
Cooke, N. J. , Gorman, J. & Winner, J. (2007). Team cognition. In F. Durso , R. Nickerson , S. Dumais , S. Lewandowsky & T. Perfect (Eds.), Handbook of applied cognition (2nd ed., pp. 239–268). Hoboken, NJ: Wiley.
Durso, F. T. , Bleckley, M. K. & Dattel, A. R. (2006). Does situation awareness add to the validity of cognitive tests? Human Factors, 48(4), 721–733.
Durso, F. T. & Dattel, A. R. (2004). SPAM: The real-time assessment of SA. In S. Banbury & S. Tremblay (Eds.), A cognitive approach to situation awareness: Theory and application (pp. 137–154). Aldershot, UK: Ashgate.
Durso, F. T. & Sethumadhavan, A. (2008). Situation awareness: Understanding dynamic environments. Human Factors, 50(3), 442–448.
Endsley, M. R. (1988). Design and evaluation for situation awareness enhancement. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 97–101. Santa Monica, CA: Human Factors and Ergonomics Society.
Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32–64.
Endsley, M. R. (2000). Direct measurement of situation awareness: Validity and use of SAGAT. In M. R. Endsley & D. J. Garland (Eds.), Situation awareness analysis and measurement (pp. 147–173). Mahwah, NJ: Lawrence Erlbaum Associates.
Endsley, M. R. (2015a). Situation awareness misconceptions and misunderstandings. Journal of Cognitive Engineering and Decision Making, 9(1), 4–32.
Endsley, M. R. (2015b). Final reflections: Situation awareness models and measures. Journal of Cognitive Engineering and Decision Making, 9(1), 101–111.
Endsley, M. R. & Jones, D. G. (2012). Designing for situation awareness: An approach to human-centered design (2nd ed.). London: Taylor & Francis.
Endsley, M. R. & Jones, W. M. (2001). A model of inter- and intrateam situation awareness: Implications for design, training and measurement. In M. McNeese , E. Salas & M. Endsley (Eds.), New trends in cooperative activities: Understanding system dynamics in complex environments (pp. 46–67). Santa Monica, CA: Human Factors and Ergonomics Society.
Endsley, M. R. & Kiris, E. O. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(2), 381–394.
Flach, J. M. (1995). Situation awareness: Proceed with caution. Human Factors, 37, 149–157.
Harwood, K. , Barnett, B. & Wickens, C. (1988). Situational awareness: A conceptual and methodological framework. Paper presented at the meeting of the 11th Symposium of Psychology in the Department of Defense, Colorado Springs, CO.
Hoffman, R. (2015). Origins of situation awareness: Cautionary tales from the history of concepts of attention. Journal of Cognitive Engineering and Decision Making, 9(1), 73–83.
Holden, R. J. , Carayon, P. , Gurses, A. P. , Hoonakker, P. , Hundt, A. S. , Ozok, A. A. & Rivera-Rodriguez, A. J. (2013). SEIPS 2.0: A human factors framework for studying and improving the work of healthcare professionals and patients. Ergonomics, 56(11), 1669–1686.
Johnson, R. C. , Saboe, K. N. , Prewett, M. S. , Coovert, M. D. & Elliott, L. R. (2009). Autonomy and automation reliability in human–robot interaction: A qualitative review. Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting, 53, 1398–1402. Santa Monica, CA: Human Factors and Ergonomics Society.
Jones, D. G. & Endsley, M. R. (1996). Sources of situation awareness errors in aviation. Aviation, Space, and Environmental Medicine, 67(6), 507–512.
Lee, J. D. & See, K. A. (2004). Trust in automation and technology: Designing for appropriate reliance. Human Factors, 46(1), 50–80.
McAtamney, L. & Corlett, E. N. (1993). RULA: A survey method for the investigation of work-related upper limb disorders. Applied Ergonomics, 24(2), 91–99.
Muir, B. M. & Moray, N. (1996). Experimental studies of trust and human intervention in a process control simulation. Ergonomics, 39(3), 429–460.
National Center for Human Factors in Healthcare. (2015). EHR User-Centered Design Evaluation Framework. Retrieved from
Oleson, K. E. , Billings, D. R. , Kocsis, V. , Chen, J. Y. C. & Hancock, P. A. (2011). Antecedents of trust in human–robot collaborations. IEEE International Multi-disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 175–178. Miami, FL: Institute of Electrical and Electronics Engineers.
Ososky, S. , Schuster, D. , Jentsch, F. , Fiore, S. , Shumaker, R. , Lebiere, C. et al. (2012). The importance of shared mental models and shared situation awareness for transforming robots from tools to teammates. Baltimore, MD: International Society for Optics and Photonics.
Padrick, M. M. , Chapman Smith, S. N. , McMurry, T. L. , Mehndiratta, P. , Chee, C. Y. , Gunnell, B. S. et al. (2015). NIH stroke scale assessment via iPad-based mobile telestroke during ambulance transport is feasible: Pilot data from the Improving Treatment with Rapid Evaluation of Acute Stroke via mobile telemedicine (iTREAT) study. Stroke, 46(Suppl 1), A90.
Parasuraman, R. & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253.
Parasuraman, R. , Sheridan, T. B. & Wickens, C. D. (2008). Situation awareness, mental workload, and trust in automation: Viable, empirically supported cognitive engineering constructs. Journal of Cognitive Engineering and Decision Making, 2(2), 140–160.
Pierce, R. S. , Vu, K.-P. L. , Nguyen, J. & Strybel, T. Z. (2008). The relationship between SPAM, workload, and task performance on a simulated ATC task. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 34–38. Santa Monica, CA: Human Factors and Ergonomics Society.
Rousseau, R. , Tremblay, S. & Breton, R. (2004). Defining and modeling situation awareness: A critical review. In S. Banbury & S. Tremblay (Eds.), A cognitive approach to situation awareness: Theory and application (pp. 3–21). Aldershot, UK: Ashgate.
Salas, E. , Prince, C. , Baker, D. P. & Shrestha, L. (1995). Situation awareness in team performance: Implications for measurement and training. Human Factors, 37(1), 123–136.
Stanton, N. A. , Stewart, R. , Harris, D. , Houghton, R. J. , Baber, C. , McMaster, R. et al. (2006). Distributed situation awareness in dynamic systems: Theoretical development and application of an ergonomics methodology. Ergonomics, 49(12–13), 1288–1311.
Taylor, R. M. (1990). Situational awareness rating technique (SART): The development of a tool for aircrew systems design. (AGARD-CP-478) Neuilly Sur Seine, France: NATO-AGARD.
Vortac, O. U. , Edwards, M. B. , Fuller, D. K. & Manning, C. A. (1993). Automation and cognition in air traffic control: An empirical investigation. Applied Cognitive Psychology, 7(7), 631–651.
Wickens, C. D. (2008). Situation awareness: Review of Mica Endsley's 1995 articles on situation awareness theory and measurement. Human Factors, 50(3), 397–403.
Wickens, C. D. , Li, H. , Sebok, A. & Sarter, N. B. (2010). Stages and levels of automation: An integrated meta-analysis. Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting, 54, 389–393. Santa Monica, CA: Human Factors and Ergonomics Society.
Yeh, M. & Wickens, C. D. (2001). Display signaling in augmented reality: Effects of cue reliability and image realism on attention allocation and trust calibration. Human Factors, 43(3), 355–365.