Quantitative evaluations are valuable when striving for improvement and asserting quality. However, the field of Command & Control (C2) evaluation is hard to navigate, and it is difficult to find the right measurement for a specific situation. A comprehensive Scoping Study was conducted on measurements of C2 performance and effectiveness. The lack of an appropriate framework for discussing C2 evaluations led to the development of the Crisis Response Management (CRM) Matrix, an analysis tool that assigns measurements to categories, where each category displays unique strengths, weaknesses, and trends. The analysis yielded results too rich for a single article; this is therefore the first of two articles covering them. This article, the Practitioner's Guide, focuses on results valuable for anyone interested in evaluating C2. Each evaluation has specific requirements that, for best results, ought to be reflected in the chosen measurement.
Objective, easy-to-use, easy-to-comprehend assessment methods with high face validity for measuring shared awareness in teams are hard to find. This paper describes an experiment in which a new measure called Shared Priorities, based on the ranking of self-generated strategic items, is tested. Trained teams were compared with non-trained teams in a dynamic problem-solving task in terms of performance and shared awareness. The Shared Priorities measure was used alongside other, well-documented measures of team awareness based on self-rating. The results show that the Shared Priorities measure correlates with performance and can also distinguish between trained and non-trained teams. However, the Shared Priorities measure did not correlate with the other team measures, suggesting that it captures a different quality of teamwork than the self-rating measures. Further, the Shared Priorities measure was found to be easy to administer and gained high user acceptance.
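Since the measure rests on rank agreement over self-generated strategic items, a minimal sketch can make the scoring idea concrete. The item list, the use of Kendall's tau, and the averaging over member pairs below are illustrative assumptions, not the published scoring procedure.

```python
# Illustrative sketch of a rank-agreement score in the spirit of Shared
# Priorities: team members independently rank the same strategic items,
# and agreement between their rankings serves as an index of shared
# awareness. Items, correlation choice, and aggregation are assumptions.
from itertools import combinations
from scipy.stats import kendalltau


def shared_priorities_score(rankings: dict[str, list[str]]) -> float:
    """Average pairwise rank correlation across team members.

    rankings maps each member to their ordering of the same items,
    most important first.
    """
    members = list(rankings)
    items = rankings[members[0]]
    taus = []
    for a, b in combinations(members, 2):
        # Convert each member's ordering into rank positions per item.
        rank_a = [rankings[a].index(item) for item in items]
        rank_b = [rankings[b].index(item) for item in items]
        tau, _ = kendalltau(rank_a, rank_b)
        taus.append(tau)
    return sum(taus) / len(taus)


# Example: three members ranking four (hypothetical) strategic items.
team = {
    "member_1": ["contain fire", "evacuate village", "protect depot", "rest crews"],
    "member_2": ["contain fire", "protect depot", "evacuate village", "rest crews"],
    "member_3": ["evacuate village", "contain fire", "protect depot", "rest crews"],
}
print(round(shared_priorities_score(team), 2))
```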
The purpose of this work-in-progress paper is to report on the method development of the Shared Priorities measure to include content analysis, as a way of gaining a deeper understanding of teamwork in crisis/emergency response. An experiment is reported in which the performance of six trained teams is compared with the performance of six non-trained teams. The experiment was performed using an emergency-response microworld simulation with a forest fire scenario. Dependent measures were simulation performance, the Crew Awareness Rating Scale (CARS), and content analysis. Trained teams performed better and scored higher on measures of team behaviors.
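To indicate what a content-analysis step might look like in this setting, the sketch below tags transcript utterances with codes and tallies their frequencies so that trained and non-trained teams can be compared. The coding scheme, keywords, and naive keyword matching are hypothetical placeholders, not the scheme used in the study.

```python
# Illustrative-only sketch: code utterances from a team communication
# transcript (naive keyword lookup) and count code frequencies per team.
from collections import Counter

CODES = {
    "information_sharing": ("spreading", "status", "report"),
    "planning": ("we should", "plan", "next step"),
    "task_allocation": ("you take", "i will", "assign"),
}


def code_utterance(utterance: str) -> str:
    text = utterance.lower()
    for code, keywords in CODES.items():
        if any(keyword in text for keyword in keywords):
            return code
    return "other"


def code_frequencies(transcript: list[str]) -> Counter:
    return Counter(code_utterance(u) for u in transcript)


transcript = [
    "Report: the fire is spreading north of the village.",
    "We should plan a fire break along the road.",
    "You take unit 2 and I will move unit 5.",
]
print(code_frequencies(transcript))
```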
A command and control environment is a dynamic and complex setting with complicated technical systems in which teams of operators interact to reach shared goals. This study presents an experiment in which we, by means of Structural Equation Modeling (SEM), explain the relations between basic concepts of command and control environments: mental workload, frustration, situational awareness, and performance. This paper reports a LISREL analysis of the Baroutsi, Berggren, Nählinder, & Johansson (2013) data. From that data, a new latent variable, "Frustration", emerges, which can now be included in the model.
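As a rough illustration of how such a latent-variable model could be specified, the sketch below uses the Python package semopy with lavaan-style model syntax. The indicator names, data file, and exact paths are illustrative assumptions; the original analysis was carried out in LISREL.

```python
# Minimal sketch of a SEM with latent Workload, Frustration, and Awareness
# predicting observed performance, specified in semopy. All variable names
# and paths are hypothetical placeholders for the published model.
import pandas as pd
import semopy

# Measurement part: each latent variable is defined by hypothetical
# indicators (mw1..mw3, fr1..fr3, sa1..sa3).
# Structural part: regressions between the latents and performance.
MODEL_DESC = """
Workload =~ mw1 + mw2 + mw3
Frustration =~ fr1 + fr2 + fr3
Awareness =~ sa1 + sa2 + sa3
Frustration ~ Workload
Awareness ~ Workload + Frustration
performance ~ Awareness + Frustration
"""

data = pd.read_csv("team_ratings.csv")  # hypothetical item-level data
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())           # parameter estimates
print(semopy.calc_stats(model))  # global fit indices
```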