Data Collection & Statistical Analysis

A critical element of your CRR plan is the collection of data and information after program implementation. If you have conducted a thorough risk assessment, you will be able to compare baseline statistics with the results observed once your CRR program is in place.

Statistical analysis is used to draw inferences about a population or data source, and it can reveal meaningful differences in averages or changes over time. Statistical analysis is a higher level of evaluation, and its findings carry more scientific weight when they are statistically significant. For example, if a community had two fire fatalities in the first year and one fire fatality in the second year, the result would be a 50% reduction. Although technically accurate, that figure is not statistically significant, and citing it would be misleading.

Statistical analysis does not necessarily prove that a program is effective; rather, it provides evidence that the program may be working. Be cautious about making claims that may not be valid or statistically significant.
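
To see why small counts cannot support such claims, here is a minimal Python sketch (using the hypothetical two-fatality and one-fatality counts from the example above) that computes an exact 95% confidence interval for a Poisson count. The interval around two fatalities easily contains one, so no real change can be inferred.

```python
# A minimal sketch: exact (Garwood) 95% Poisson confidence interval
# for a small count. The counts (2 fatalities, then 1) are the
# hypothetical figures from the example in the text.
from scipy.stats import chi2

def poisson_ci(count, alpha=0.05):
    """Exact confidence interval for a Poisson-distributed count."""
    lower = 0.0 if count == 0 else chi2.ppf(alpha / 2, 2 * count) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (count + 1)) / 2
    return lower, upper

year1, year2 = 2, 1
low, high = poisson_ci(year1)
print(f"95% CI around {year1} fatalities: ({low:.2f}, {high:.2f})")
# The interval (~0.24 to ~7.22) easily contains 1, so the drop from
# 2 fatalities to 1 is not evidence of a real change.
```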

Trend Analysis

In statistics, a trend analysis, or trending, typically refers to techniques for extracting an underlying pattern of behavior from a time series. Trending illustrates fluctuations and changes in outcomes and outputs over time. It is an important outcome measure because, over time, it can reveal changes that are not due to random chance or normal variation.
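
As an illustration of what extracting an underlying pattern can look like in practice, the following Python sketch (using made-up yearly kitchen-fire counts) fits a least-squares trend line and a three-year moving average to a short time series; both are common ways to separate the underlying direction from year-to-year noise.

```python
# A minimal trend-extraction sketch using made-up yearly kitchen-fire counts.
import numpy as np

years = np.arange(2006, 2016)                                # 10-year series
fires = np.array([62, 58, 65, 60, 57, 49, 45, 44, 40, 38])   # hypothetical counts

# Least-squares trend line: the slope is the average change per year.
slope, intercept = np.polyfit(years, fires, deg=1)
print(f"Trend: {slope:.1f} fires per year")                  # negative slope = declining

# A 3-year moving average smooths out single-year spikes.
smoothed = np.convolve(fires, np.ones(3) / 3, mode="valid")
print("3-yr moving average:", np.round(smoothed, 1))
```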

Figure 9: Trending Chart Example

The preceding figure is an example of a chart illustrating a 10-year trend in kitchen fires. The chart shows that the frequency of kitchen fires began to decline in the years following the implementation of a CRR program targeting kitchen fires.

Although specific changes may be seen in the impact evaluation, that does not necessarily mean there will be reductions in the outcome evaluation. A number of factors can affect outcomes, which is one reason to measure some outcomes on a per capita basis.

The following illustration demonstrates the difference between changes and reductions. In 2014, the city had a population of 100,000 persons with 10 fire fatalities, a per capita fire fatality rate of 1 per 10,000 population. In 2015, the population had risen to 150,000 persons with 11 fire fatalities, a per capita rate of 1 per 13,636 persons.

Figure 10: Example of Change versus Reductions

Although the total number of fire fatalities rose by 10%, the per capita rate of fire fatalities actually decreased by nearly 27%.
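
A short Python sketch, using the figures from the example above, makes the arithmetic explicit:

```python
# Per capita rate arithmetic for the example above.
pop_2014, deaths_2014 = 100_000, 10
pop_2015, deaths_2015 = 150_000, 11

rate_2014 = deaths_2014 / pop_2014      # 1 per 10,000
rate_2015 = deaths_2015 / pop_2015      # 1 per ~13,636

print(f"2014: 1 per {pop_2014 / deaths_2014:,.0f} population")
print(f"2015: 1 per {pop_2015 / deaths_2015:,.0f} population")

change_total = (deaths_2015 - deaths_2014) / deaths_2014   # +10%
change_rate = (rate_2015 - rate_2014) / rate_2014          # about -26.7%
print(f"Total fatalities: {change_total:+.0%}; per capita rate: {change_rate:+.1%}")
```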

The following illustration is another example of trending, but it represents a comparison of local, state, and national averages. (This also represents, and is linked to, benchmarking; see “Benchmarking” below.)

Figure 11: Example of Trending Comparison to State & National Averages

Benchmarking

Benchmarking is a process by which one organization (e.g., a fire department) compares its results or performance against another organization’s. Usually, this focuses on a specific performance or evaluation metric and compares one organization’s results to other organizations’ best practices. The figure above is an example of benchmarking linked to trending.

Caution must be used when comparisons are made among agencies. You must ensure that the same parameters are used by each organization. A good example is the measurement of cardiac arrest survival outcomes: not all organizations use the same parameters when determining the percentage of patients they successfully resuscitate.

The table below demonstrates the difference in reported cardiac arrest survival between two communities. When viewing the survival rates of the two fire departments, it appears that Fire Department A has a significantly better rate of survival than Fire Department B.

Comparison of Survival Rates from Cardiac Arrest

| | Fire Department A | Fire Department B |
| --- | --- | --- |
| Percent Survived to Discharge | 28% | 12% |
| Measurement Components | Patients found in VF; witnessed cardiac arrests; bystander CPR performed | All cardiac arrest patients |

However, this is not an accurate comparison: Fire Department A measures survival only among patients who were found in ventricular fibrillation (VF), had a witnessed arrest, and received bystander CPR. In contrast, Fire Department B measures its results across all cardiac arrest patients.
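
The effect of the denominator alone can be illustrated with a short Python sketch; the case records below are made up, not a real registry schema:

```python
# Sketch: the same hypothetical cardiac-arrest records, two denominators.
cases = [
    {"vf": True,  "witnessed": True,  "bystander_cpr": True,  "survived": True},
    {"vf": True,  "witnessed": True,  "bystander_cpr": True,  "survived": False},
    {"vf": False, "witnessed": False, "bystander_cpr": False, "survived": False},
    {"vf": True,  "witnessed": False, "bystander_cpr": False, "survived": False},
    {"vf": False, "witnessed": True,  "bystander_cpr": False, "survived": False},
]

# Department A's style: only witnessed VF arrests with bystander CPR count.
subset = [c for c in cases if c["vf"] and c["witnessed"] and c["bystander_cpr"]]
rate_a = sum(c["survived"] for c in subset) / len(subset)

# Department B's style: all cardiac arrest patients count.
rate_b = sum(c["survived"] for c in cases) / len(cases)

print(f"Narrow denominator: {rate_a:.0%}")   # 50% with these made-up records
print(f"All arrests:        {rate_b:.0%}")   # 20% with the same records
```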

The following illustration is an example of benchmarking fire incident rates per capita among various fictitious cities throughout the United States.

Figure 12: Benchmark Example of Fire Incident Rates per Capita

Performance Measures

Performance standards are utilized consistently throughout fire and emergency services. The National Fire Protection Association (NFPA), Commission on Fire Accreditation International (CFAI), Insurance Services Office (ISO), Commission on Accreditation of Ambulance Services (CAAS), and Governmental Accounting Standards Board (GASB) have each developed recommended performance standards related to fire protection, prevention, and other emergency services.

For example, NFPA Standard 1710 describes travel-time standards for the arrival of the first engine at a fire suppression incident.7 The CFAI defines recommendations for a community risk analysis, as well as for fire prevention and life safety programs. The ISO includes recommended distances from fire stations to commercial properties.

While most of these performance criteria are consensus standards and are usually not mandatory, they provide guidance on potential goals that your department can adopt. However, your organization must determine reasonable performance goals based on the characteristics of your community.
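
As a sketch of how a percent-compliance goal of this kind might be checked, the following Python example (using hypothetical response times in minutes) computes the fraction of responses meeting a target, which is the form several of the measures in the table below take:

```python
# Sketch: percent of responses meeting a travel-time goal (hypothetical data).
responses_min = [4.2, 5.8, 3.9, 4.7, 6.1, 4.4, 5.2, 4.9, 3.5, 7.0]
target_min, goal_pct = 5.0, 0.90     # e.g., 90% of responses within 5 minutes

met = sum(t <= target_min for t in responses_min) / len(responses_min)
print(f"{met:.0%} of responses within {target_min} minutes "
      f"({'meets' if met >= goal_pct else 'misses'} the {goal_pct:.0%} goal)")
```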

The table below is an example of performance measures, which can overlap with evaluation types (process, impact, and outcome).

Figure 13: Performance Measure Example
Performance Measures & Outcomes Current Goal 2013 2014 2015 Trend
Incident Response
Total Calls for Service Decrease FPY 21,203 21,326 23,184 Improving
Urban Response ≤ 5 minutes 90% 57% 59% 61%
Suburban Response ≤ 6 minutes 90% 55% 54% 52%
Rural Response ≤ 8 minutes 90% 71% 69% 73%
False Alarms as percent of Total Calls < 5% 3.8% 3.4% 4.7%
Emergency Medical Services
Cardiac Arrest Survival (witnessed & VF) Increase FPY 13.6% 26.5% 57% Improving
ALS: Percent calls to nursing & other facilities Decrease FPY 14.3% 8.3% 7.9%
ALS Average On-Scene Time Decrease FPY 17:06 18:00 16:03 Worse

FPY: From previous year
Green: Goal met
Yellow: Missed goal, but close
Red: Missed goal totally, or bad trend

Process measures are related to outputs and inputs, but usually not to outcomes; however, there is often overlap. The figure below illustrates an example of performance reporting and the measure type for each item.

Figure 14: Process, Impact, & Outcome Performance Reporting Examples
| Minimize the Effect of Fires | 2013 | 2014 | 2015 | Type |
| --- | --- | --- | --- | --- |
| Percent of fires confined to room of origin | 49% | 50% | 40% | Outcome |
| Average dollar loss from structure fires | $15,523 | $12,585 | $41,462 | Outcome |
| Average total fire-response on-scene time | 44:53 min | 55:11 min | 47:11 min | Output |
| Fire Prevention | | | | |
| Determine cause of fires | 78% | 90% | 89% | Outcome |
| Conduct safety presentations | 241 | 186 | 183 | Output |
| Number of effective fire inspections | 5,669 | 5,780 | 6,006 | Output |
| Average percent knowledge gain | 35% | 37% | 42% | Impact |

Finally, your evaluation does not require complex, sophisticated charts and graphs. Good data is imperative to monitoring your program accurately, and it lends credibility to your organization. Depending on your capabilities and resources, it is up to your organization to determine which measures to use. Consider asking for assistance from individuals outside your organization who have relevant expertise (e.g., statisticians or industry consultants) to help with your CRR program evaluation.