Assessing human factors during simulation: The development and preliminary validation of the rescue assessment tool

Unsworth, John, Melling, Andrew, Allan, Jaden, Tucker, Guy and Kelleher, Michael (2014) Assessing human factors during simulation: The development and preliminary validation of the rescue assessment tool. Journal of Nursing Education and Practice, 4 (5). ISSN 1925-4040

Available under License Creative Commons Attribution.

Background: Failure to rescue the deteriorating patient is a concern for all healthcare providers. In response to this problem, providers have introduced a range of interventions to promote timely rescue. Human factors and non-technical skills play a part both in the recognition of ill patients and in the delivery of the interventions associated with their successful rescue. Given the risks to patient safety that failure to rescue poses, simulation provides a vehicle for staff training and development in both technical and non-technical skills. This paper describes the development and preliminary validation of a human factors rating tool specifically designed to assess the non-technical skills associated with the recognition and rescue of the deteriorating patient.

Methods: Using high-fidelity simulation scenarios related to patient deterioration, faculty independently rated student performance. Scoring took place using video footage of the students' performance. Data were analyzed to establish the validity of the tool, the internal consistency between categories and elements, and inter-rater reliability.

Results: Content validity was established through a process of review and by checking for duplicate or redundant items. The internal consistency of the tool was acceptable, with a Cronbach's alpha of 0.84. Factor analysis suggested that the tool assessed only two components rather than the three hypothesized during tool development; these were labelled "recognizing and responding" and "leading and reassuring". Inter-rater reliability was initially poor at 0.21, but following training of raters it rose to above 0.8 for two videos related to the same scenario, one of which had been used during training. However, when the scenario changed, reliability dropped to 0.5.

Conclusions: The rescue assessment tool appears to be well structured, with good levels of inter-rater reliability following intensive rater training related to the specific scenario being scored. Further work is required to establish all aspects of construct validity and to ensure test-retest reliability.

Item Type: Article
Subjects: B900 Others in Subjects allied to Medicine
Department: Faculties > Health and Life Sciences > Nursing, Midwifery and Health
Depositing User: Becky Skoyles
Date Deposited: 07 Jul 2014 09:19
Last Modified: 17 Dec 2023 15:17
