Discussion Board Question: Hospital Emergency Management Planning (Exercises and Training)

- Reading:
  * Reilly, M., & Markenson, D. S. (2010). Health Care Emergency Management: Principles and Practice. Chapter 6: Introduction to Exercise Design and Evaluation; Chapter 8: Education and Training.
  * Emergency Management Principles and Practices for Healthcare Systems (2006).
  * Kaji, A. H., Langford, V., & Lewis, R. J. (2008). Assessing Hospital Disaster Preparedness: A Comparison of an On-Site Survey, Directly Observed Drill Performance, and Video Analysis of Teamwork. Annals of Emergency Medicine, 52(3), 195-201. (Assessing Hospital Disaster Preparedness.pdf)
- Discussion Board Questions:
  * What are some of the biggest challenges in developing and implementing a preparedness exercise in a hospital setting?
  * What differences/similarities exist between hospital and municipal preparedness exercises?
- APA style.
- At least two references.
- Use your critical thinking and experience; do not only summarize.
- Answer all questions clearly.
- I have attached two examples of answers.

assessing_hospital_disaster_preparedness.pdf
example_of_answers1.docx
example_of_answers2.docx

Unformatted Attachment Preview

DISASTER MEDICINE/ORIGINAL RESEARCH
Assessing Hospital Disaster Preparedness: A Comparison of an
On-Site Survey, Directly Observed Drill Performance, and Video
Analysis of Teamwork
Amy H. Kaji, MD, MPH
Vinette Langford, RN, MSN
Roger J. Lewis, MD, PhD
From the Department of Emergency Medicine, Harbor–UCLA Medical Center, Los Angeles, CA (Kaji,
Lewis); David Geffen School of Medicine at UCLA, Torrance, CA (Kaji, Lewis); Los Angeles Biomedical
Research Institute, Torrance, CA (Kaji, Lewis); The South Bay Disaster Resource Center at
Harbor–UCLA Medical Center, Los Angeles, CA (Kaji); and MedTeams and Healthcare Programs Training
Development and Implementation, Dynamics Research Corporation, Andover, MA (Langford).
Study objective: There is currently no validated method for assessing hospital disaster preparedness.
We determine the degree of correlation between the results of 3 methods for assessing hospital disaster
preparedness: administration of an on-site survey, drill observation using a structured evaluation tool, and
video analysis of team performance in the hospital incident command center.
Methods: This was a prospective, observational study conducted during a regional disaster drill,
comparing the results from an on-site survey, a structured disaster drill evaluation tool, and a video
analysis of teamwork, performed at 6 911-receiving hospitals in Los Angeles County, CA. The on-site
survey was conducted separately from the drill and assessed hospital disaster plan structure, vendor
agreements, modes of communication, medical and surgical supplies, involvement of law enforcement,
mutual aid agreements with other facilities, drills and training, surge capacity, decontamination capability,
and pharmaceutical stockpiles. The drill evaluation tool, developed by Johns Hopkins University under
contract from the Agency for Healthcare Research and Quality, was used to assess various aspects of
drill performance, such as the availability of the hospital disaster plan, the geographic configuration of the
incident command center, whether drill participants were identifiable, whether the noise level interfered
with effective communication, and how often key information (eg, number of available staffed floor,
intensive care, and isolation beds; number of arriving victims; expected triage level of victims; number of
potential discharges) was received by the incident command center. Teamwork behaviors in the incident
command center were quantitatively assessed, using the MedTeams analysis of the video recordings
obtained during the disaster drill. Spearman rank correlations of the results between pair-wise groupings
of the 3 assessment methods were calculated.
Results: The 3 evaluation methods demonstrated qualitatively different results with respect to each
hospital’s level of disaster preparedness. The Spearman rank correlation coefficient between the
results of the on-site survey and the video analysis of teamwork was –0.34; between the results of
the on-site survey and the structured drill evaluation tool, 0.15; and between the results of the video
analysis and the drill evaluation tool, 0.82.
Conclusion: The disparate results obtained from the 3 methods suggest that each measures distinct
aspects of disaster preparedness, and perhaps no single method adequately characterizes overall
hospital preparedness. [Ann Emerg Med. 2008;52:195-201.]
Copyright © 2008 by the American College of Emergency Physicians.
doi:10.1016/j.annemergmed.2007.10.026
INTRODUCTION
A disaster may be defined as a natural or manmade event
that results in an imbalance between the supply and demand
for resources.1 Events of September 11, 2001, and the
devastation from Hurricanes Katrina and Rita have recently
highlighted the importance of hospital disaster preparedness
and response. Previous disasters have demonstrated
weaknesses in hospital disaster management, including
confusion over roles and responsibilities, poor
communication, lack of planning, suboptimal training, and a
Editor’s Capsule Summary
What is already known on this topic
Extremely little is known about how to objectively and
accurately rate hospital disaster preparedness. Scales and
measurements have been developed but not extensively
validated; most evaluations are highly subjective and
subject to bias.
What question this study addressed
At 6 sites, 3 evaluation methods (an onsite predrill survey, a real-time drill
performance rating tool, and a video teamwork analysis) were used and
correlations among evaluation methods examined.
What this study adds to our knowledge
The 3 methods produced disparate evaluations of
preparedness, suggesting that the instruments are flawed,
they are measuring different things, or both.
How this might change clinical practice
Better assessment tools for hospital disaster preparedness
need to be developed, perhaps beginning with the careful
definition of what aspects of preparedness are to be
measured.
lack of hospital integration into community disaster
planning.2
Despite The Joint Commission’s emphasis on emergency
preparedness for all hospitals, including requirements for having
a written disaster plan and participating in disaster drills, there is
currently no validated, standardized method for assessing
hospital disaster preparedness. This lack of validated assessment
methods may reflect the complex and multifaceted nature of
hospital preparedness.
To be prepared to care for an influx of victims, a hospital
must have adequate supplies, equipment, and space, as well as
the appropriate medical and nonmedical staff. Survey
instruments, either self-administered or conducted on site, may
be used to assess these resources. Although surveys and
questionnaires attempt to capture a hospital’s level of
preparedness through quantifying hospital beds, ventilators,
isolation capacity, morgue space, available modes of
communication, frequency of drills, and other aspects of disaster
preparedness,3-8 it is unclear whether they are reliable or valid
predictors of hospital performance during an actual disaster, or
even during a drill. In contrast to surveys, which assess hospital
resources and characteristics during a period of usual activity,
disaster drills make use of moulaged victims to gauge hospital
preparedness and assess staff interactions in a dynamic
environment in real time.
Although hospitals routinely conduct after-drill debriefing
sessions, during which participants discuss deficiencies
warranting improvement, there is no commonly used and
validated method for evaluating hospital performance during
disaster drills. To address this gap, the Johns Hopkins
University Evidence-based Practice Center, with support from
the Agency for Healthcare Research and Quality (AHRQ),
developed a hospital disaster drill evaluation tool.9 The tool
includes separate modules for the incident command center,
triage area, decontamination zone, and treatment areas. In a
recent study, conducted in parallel with the study reported here,
we described the AHRQ evaluation tool’s internal and interrater
reliability.10 We found a high degree of internal reliability in the
instrument’s items but substantial variability in interrater
reliability.10
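The companion article's reliability calculations are not reproduced in this excerpt. As an illustration only, the sketch below (Python) computes Cohen's kappa, one common measure of interrater agreement for dichotomous codings such as the drill evaluation items; the observer codings are hypothetical, and this is not necessarily the statistic used in the companion study.

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters' dichotomous codings of the same items."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement estimated from each rater's marginal proportions.
    p_a1 = sum(codes_a) / n
    p_b1 = sum(codes_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of 10 drill items by two observers (1 = better, 0 = poorer)
observer_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
observer_b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(round(cohens_kappa(observer_a, observer_b), 2))   # prints 0.52
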
Recently, evidence has suggested that enhancing teamwork
among medical providers optimizes the provision of health care,
especially in a stressful setting, and some experts working in this
area have adopted the aviation model as a basis for designing
teamwork programs to reduce medical errors.11 In 1998,
researchers from MedTeams, a research corporation that focuses
on observing and rating team behaviors, set out to evaluate the
effectiveness of using aviation-based crew resource management
programs to teach teamwork behaviors in emergency
departments (EDs), conducting a prospective, multicenter,
controlled study.12 The MedTeams study, published in 2002,
demonstrated a statistically significant improvement in the
quality of team behaviors, as well as a reduction in the clinical
error rate, after completion of the Emergency Team
Coordination Course.12
Because effective teamwork and communication are essential
to achieving an organized disaster response, assessing teamwork
behavior may be a key element in a comprehensive evaluation of
hospital disaster response. Evaluating teamwork behaviors
involves the assessment of the overall interpersonal climate, the
ability of team members to plan and problem-solve, the degree
of reciprocity among team members in giving and receiving
information and assistance, the team’s ability to manage
changing levels of workload, and the ability of the team to
monitor and review its performance and improve its teamwork
processes.12 In addition to observing team members in real
time, MedTeams researchers routinely review videotaped
interactions among team members as a method of quantifying
teamwork behaviors.
The objective of our study was to determine the degree of
correlation between 3 measures of assessing hospital disaster
preparedness: an on-site survey, directly observed drill
performance, and video analysis of teamwork behaviors.
MATERIALS AND METHODS
Six 911-receiving hospitals, participating in the annual,
statewide disaster drill in November 2005, agreed to complete
the site survey and undergo the drill evaluation and video
analysis. The selection of the sample of hospitals and their
characteristics has been described previously.10 The drill
scenario included an explosion at a public venue, with multiple
victims. To preserve the anonymity of the hospitals, they are
designated numerically 1 through 6. Because all data were
deidentified and reported in aggregate, our study was verified as
exempt by the institutional review board of the Los Angeles
Biomedical Research Institute at Harbor–UCLA Medical
Center.
We used an on-site survey (included in Appendix E1,
available online at http://www.annemergmed.com), which
included 79 items focusing on areas previously identified as
standards or evidence of preparedness.1-3,13-28 The survey was a
modification of an instrument we used in a previous study.8
Compared with the original survey instrument, the number of
items was reduced from 117 to 79 by the study investigators to
eliminate items that had limited discriminatory capacity and to
reduce redundancy and workload. Survey items included a
description of the structure of the hospital disaster plan, modes
of intra- and interhospital communication, decontamination
capability and training, characteristics of drills, pharmaceutical
stockpiles, and each facility’s surge capacity (assessed by
monthly ED diversion status, number of available beds,
ventilators, negative pressure isolation rooms, etc). Because a
survey performed in 1994 demonstrated that hospitals were
better prepared when the medical directors of the ED
participated in community planning,27 we also assessed whether
each hospital participated in the local disaster planning
committee. Additional survey items examined mutual aid
agreements with other hospitals and long-term care facilities;
predisaster “preferred” agreements with medical vendors;
protocols for canceling elective surgeries and early inpatient
discharge; the ability to provide daycare for dependents of
hospital staff; the existence of syndromic surveillance systems;
ongoing training with local emergency medical services (EMS)
and fire agencies; communication with the public health
department; and protocols for instituting volunteer
credentialing systems, hospital lockdown, and managing mass
fatality incidents.
The survey was distributed by electronic mail, and between
June 2006 and June 2007, the disaster coordinators at each of
the 6 hospitals completed the survey. The on-site “inspection”
to verify the responses to the 79-item survey was performed by a
single observer (A.H.K.) between June 2006 and June 2007.
During the visit, necessary clarification of responses to the
survey items was obtained, followed by an examination of the
hospital disaster plan, the decontamination shower, the personal
protective equipment, communication systems (eg, walkie-talkies and radio system), Geiger counters, the ED, the
laboratory, the pharmacy, and the designated site of the incident
command center.
The possible answers for 71 of the 79 survey items were
assigned a point value. Depending on perceived importance,
items were allocated zero to 1 point, zero to 3 points, or zero to
5 points, with a higher score indicating better preparedness. For
example, for the question "How many patients could you treat
for a nerve agent exposure?" the answer "fewer than 10" would
be given a score of zero, the answer “10 to 20” would be given a
score of 1, “20 to 30” would be given a score of 2, and “greater
than 30” would be given a score of 3. There were also 8 of 79
questions to which no point value was assigned because the item
was not designed to discriminate between levels of preparedness.
For example, no point value was assigned to the question "Have
you ever had to truly implement the incident command
structure?" A summary score for overall preparedness was
calculated by summing each of the item scores. The maximum
possible score was 215 (see Appendix E1, available online at
http://www.annemergmed.com).
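To make the tiered scoring scheme concrete, the minimal sketch below (Python) shows how such a summary score could be computed. Only the nerve agent item and its answer bands come from the text; the other item key, the scoring-table layout, and the score_survey helper are hypothetical.

# Minimal illustration of the tiered survey scoring described above.
NERVE_AGENT_ITEM = {              # a 0-3 point item, per the example in the text
    "fewer than 10": 0,
    "10 to 20": 1,
    "20 to 30": 2,
    "greater than 30": 3,
}

SCORING_TABLES = {
    "nerve_agent_treatment_capacity": NERVE_AGENT_ITEM,
    # ... remaining scored items, worth 0-1, 0-3, or 0-5 points each,
    # giving a maximum possible summary score of 215 across 71 items
}

def score_survey(responses, scoring_tables):
    """Sum the point values of all scored items; unscored items contribute nothing."""
    total = 0
    for item, answer in responses.items():
        table = scoring_tables.get(item)
        if table is not None:     # 8 of the 79 items carry no point value
            total += table[answer]
    return total

responses = {
    "nerve_agent_treatment_capacity": "10 to 20",    # worth 1 point
    "ever_activated_incident_command": "yes",        # unscored item
}
print(score_survey(responses, SCORING_TABLES))        # prints 1
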
As described in our recent study and companion article
evaluating the reliability of the drill evaluation tool, 32 trained
medical student observers were deployed to the 6 participating
hospitals to evaluate drill performance using the AHRQ
instrument.9 Two hundred selected dichotomous drill
evaluation items were coded as better versus poorer preparedness
by the study investigators.10 An unweighted “raw performance”
score was calculated by summing these dichotomous indicators.
Although the drill evaluation instrument assesses multiple areas
of the hospital, including triage, decontamination, treatment,
and incident command, we chose to consider only those items
related to the performance of the incident command center
because it was the only drill evaluation module that was applied
at all 6 hospitals, as described in the companion article.10
Moreover, the MedTeams evaluation (see below) was based on
video analysis of the incident command center. We also believed
that a high level of performance in the incident command
center would be correlated with high levels of performance
elsewhere in the hospital.
There were 45 items evaluating the incident
command center that could be dichotomously coded as
indicating better or worse preparedness. Examples of drill
evaluation items included whether the incident command
center had a defined boundary zone, the incident commander
took charge of the zone, the incident commander was easily
identifiable, the hospital disaster plan was accessible, and
whether the noise level in the incident command center
interfered with effective communication.
Because of the limited number of observers, 2 hospitals had 1
observer deployed to the incident command center, whereas 4
hospitals had 2 observers. When 2 observers were available, the
average of the 2 scores was calculated.
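A brief sketch of how the unweighted raw performance score described above could be computed is shown below (Python), assuming each observer records the 45 incident command center items as 1 for better and 0 for poorer preparedness; the function name and the example codings are hypothetical.

from statistics import mean

def raw_performance_score(observer_codings):
    """Unweighted raw performance score for one hospital's incident command center.

    observer_codings holds one list per observer, each containing the 45
    dichotomous item codings (1 = better preparedness, 0 = poorer). The
    per-observer sums are averaged when 2 observers were deployed.
    """
    return mean(sum(items) for items in observer_codings)

# Hypothetical hospital with 2 observers in the incident command center
observer_a = [1] * 30 + [0] * 15    # observer A credits 30 of the 45 items
observer_b = [1] * 32 + [0] * 13    # observer B credits 32 of the 45 items
print(raw_performance_score([observer_a, observer_b]))   # prints 31
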
A professional video company was employed to film activities
at each of the hospitals on the day of the disaster drill. Although
various areas of the hospital were filmed, the predominant focus
was on the incident command center and capturing the
interactions among its members. The videos were edited,
transferred to DVDs, and sent to MedTeams, whose staff were
blinded to the drill and site survey results, for analysis and
scoring of teamwork behaviors.
To assess teamwork behaviors, MedTeams uses a team
dimension rating scale based on the 5 team dimensions of the
behaviorally anchored rating scales and an overall score, which is
a mean of the 5 team dimensions. The range of possible scores
for each of the team dimensions was 1 to 7. “Team dimension
rating” is the term applied to the process of observing team
behavior and assigning ratings to each of the 5 behaviorally
anchored rating scale team dimensions.29
Each team dimension has specific criteria that are used for
scoring purposes. The first team dimension assesses how well the
team structure was constructed and maintained. For example,
the observer is asked to rate how efficiently the leader assembled
the team, assigned roles and responsibilities, communicated
with each of the team members, acknowledged contributions of
team members to team goals, demonstrated mutual respect in all
communications, held everyone accountable for team outcomes,
addressed professional concerns, and resolved conflicts
constructively.29
The second team dimension assesses planning and problem-solving capability. Observations include whether team members
were engaged in the planning and decisionmaking process,
whether protocols were established to develop a plan, whether
team members were alerted to potential biases and errors, and
how errors were avoided and corrected.29
The third team dimension evaluates team communications.
Observations include whether situational awareness updates
were provided, whether a common terminology was used,
whether the transfer of information was verified, and whether
decisions were communicated to team members.29
The fourth team dimension assesses the management of team
workload. The observer records whether there was a team-established plan to redistribute the workload, integrating
individual assessments of patient needs, overall caseload, and
updates from actions of team members.29
The final team dimension describes team improvement skills.
Recorded observations include whether there were shift reviews
of teamwork, whether teamwork considerations were included
in educational forums, and whether situational learning and
teaching were incorporated into such forums.29
Although behaviorally anchored rating scale descriptions
specify distinct clusters of teamwork behaviors, there is some
inevitable overlap across the 5 team dimensions. The
behaviorally anchored rating scale describes concrete and
specific behaviors for each team dimension and provides anchors
for the lowest, middle, and highest values (standards of
judgment). Additionally, the behaviorally anchored rating scale
delineates criteria for assigning a numeric value to the rater’s
judgment, and each of the 5 dimensions is rated on a numeric
scale of 1 to 7, in which 1 is very poor and 7 is deemed
superior.29
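Because the overall MedTeams score is defined above as the mean of the 5 team dimension ratings, each on a 1-to-7 scale, the short sketch below (Python) shows that calculation, together with the conversion to a percentage of the 7-point maximum that appears to underlie the percentages reported in the Table; the dimension labels are paraphrased, not the instrument's exact wording.

from statistics import mean

def medteams_overall(dimension_ratings):
    """Overall rating: mean of the 5 behaviorally anchored team dimension scores (1-7)."""
    overall = mean(dimension_ratings.values())
    percent_of_max = round(100 * overall / 7)   # inferred reporting convention in the Table
    return overall, percent_of_max

ratings = {                      # hypothetical ratings for one incident command center
    "team_structure": 5,
    "planning_and_problem_solving": 5,
    "communication": 5,
    "workload_management": 5,
    "team_improvement": 5,
}
print(medteams_overall(ratings))   # prints (5, 71), matching a "5 (71)" entry in the Table
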
Primary Data Analysis
Data obtained from the on-site survey and drill evaluation
tool were recorded on data collection forms. All data were stored
in an Access database (Access 2003; Microsoft Corporation,
Redmond, WA). The database was translated into SAS format
using DBMS/Copy (DataFlux Corporation, Cary, NC). The
statistical analysis was performed using SAS, version 9.1 (SAS
Institute, Inc., Cary, NC), and Stata, version 9.2 (StataCorp,
College Station, TX).
Table. Results of 3 methods of assessing hospital disaster drill performance.

Hospital    On-site Survey    Modified AHRQ Score    MedTeams ICC Score
Number*     (1–215) (%)†      in ICC (1–45) (%)†     (1–7) (%)†
1           155 (72)          31 (69)                5 (71)
2           155 (72)          19 (42)                4 (57)
3           186 (87)          27 (60)                4.8 (69)
4           159 (74)          34 (76)                5 (71)
5           166 (77)          24 (53)                4.2 (60)
6           152 (71)          26 (58)                5 (71)

ICC, Incident command center.
*Note that there was only 1 observer deployed to the ICC at hospitals 1 and 4, whereas the remaining 4 hospitals had 2 observers simultaneously deployed to the ICC, and the score represents the average of the 2 scores.
†Score, with the percentage of the maximum possible score in parentheses.
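To illustrate the primary analysis, the sketch below (Python) recomputes the pair-wise Spearman rank correlations from the per-hospital scores in the Table; it assumes SciPy is available (the published analysis was run in SAS and Stata) and reproduces the reported coefficients to within rounding.

from scipy.stats import spearmanr

# Per-hospital scores transcribed from the Table (hospitals 1-6)
onsite_survey = [155, 155, 186, 159, 166, 152]    # on-site survey, max 215
ahrq_drill    = [31, 19, 27, 34, 24, 26]          # modified AHRQ ICC score, max 45
medteams      = [5, 4, 4.8, 5, 4.2, 5]            # MedTeams ICC score, max 7

pairs = {
    "survey vs video analysis":      (onsite_survey, medteams),    # reported: -0.34
    "survey vs drill evaluation":    (onsite_survey, ahrq_drill),  # reported: 0.15
    "video analysis vs drill tool":  (medteams, ahrq_drill),       # reported: 0.82
}

for label, (x, y) in pairs.items():
    rho, p_value = spearmanr(x, y)
    print(f"{label}: rho = {rho:.2f}")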

