
Course 716 - Safety Management System Evaluation


Evaluating the SMS

Evaluation Meanings

Evaluation defined: Webster defines the term evaluate as "to judge the worth of." Evaluation is a systematic, objective process for determining the success of a policy or program. It addresses questions about whether, and to what extent, the program is achieving its goals and objectives. Most SMS evaluations are objective, standardized, systematic, and formal.

Evaluation has several distinguishing characteristics.

An evaluation:

  1. assesses the effectiveness of an ongoing program in achieving its objectives,
  2. relies on the standards of project design to distinguish a program's effects from those of other forces, and
  3. aims at program improvement through a modification of current operations.

Evaluations are usually carried out by an evaluation team such as members of the safety committee or other safety staff. Team members should assist in developing the evaluation design, developing data collection instruments, collecting data, analyzing data, and writing the report. The evaluation plan is a written document describing the overall approach or design that will be used to guide an evaluation.

An evaluation plan should include:

  • what will be done
  • how it will be done
  • who will do it
  • when it will be done
  • why the evaluation is being conducted.

Purpose of an Evaluation

System evaluations generally have four basic purposes:

Evaluate the design: Examination of the written plans, policies, procedures, and other documents to determine how clearly they are written and whether they contain the necessary information. For instance, during the SMS evaluation, an evaluator would examine the written hazard communication program to make sure it contained the required information.

Evaluate the process: Another primary consideration in an evaluation is to assess the quality of SMS activities. For example, an evaluator might observe trainers using the program, write a descriptive account of how employees respond, and then provide feedback to the instructors.

Evaluate results: It's important for an evaluation to study the immediate or direct results of the SMS and its programs on employees. For example, the evaluator may conduct a walk-around inspection to determine the safety status of tools, equipment, and materials in the workplace.

Evaluate impact: An effective evaluation looks beyond the immediate conditions and behaviors representing the results of policies, instruction, or services. It also identifies longer-term as well as unintended program effects. It may also examine what happens when several programs operate in unison. For example, an impact evaluation might examine whether a safety program's immediate positive effects on behavior were sustained over time.

Regardless of their primary focus, all evaluations use data collected in a systematic manner. These data may be:

  • quantitative, such as counts of safe/unsafe behaviors, or
  • qualitative, such as descriptions of the effectiveness of an incentive and recognition program.

Successful evaluations often blend quantitative and qualitative data collection. The choice of which to use should be made with an understanding that there is usually more than one way to answer any given question.

The SMS and Placing Blame

This is important. Do not conduct an SMS evaluation to determine the inherent value of a person. We don't evaluate to find out who is mad, bad, evil, lazy, crazy, stupid, or otherwise flawed. Do not make value judgments that attack a person or group. A key principle to understand here is that if you attack people, they attack back.

If the purpose of an evaluation is to "fix the system," playing the "blame game" is not effective because it does not achieve that purpose. In fact, the evaluation may be counterproductive.

If we evaluate to place blame, we'll stop the process once blame has been determined. As a result, we'll never get past blame to evaluate the system. In an effective SMS evaluation, our objective is to discover the effectiveness of the system.

Our primary question about programs is, "Do they work, or don't they?"

If the purpose is to fix the blame, you are not going to ask this critical question. Why? Because...

When the purpose of a process has been achieved, the process stops!

Safety Committees Should Help Evaluate the SMS

The safety committee can help by evaluating the employer's accident and illness prevention program, and making written recommendations to improve the program where applicable. This best practice emphasizes the fact that a very important safety committee responsibility is to help the employer evaluate the SMS. The safety committee should also be able to write quality recommendations to improve the SMS.

Determine the Benchmark

To conduct an evaluation, we need to take the information gathered from the baseline survey and rate it against an established benchmark. A benchmark is a standard by which the system can be measured or judged; for instance, we might say XYZ's SMS is a "benchmark of quality" in our industry. In the optional modules of this course (Modules 5-12), you will be introduced to the OSHA Safety and Health Program Assessment Worksheet, which may be used as a benchmark. This audit evaluates the same 58 elements of an SMS that OSHA uses to evaluate companies participating in the Safety and Health Achievement Recognition Program (SHARP). You may also be interested in using other evaluation standards as benchmarks.

Let's take a look at a simple example of how it all works.

Analysis Example


There is a basket of apples on the counter. You see one apple has a bump on it! You have now identified a possible problem.

Analysis: What does it look like?

To better understand why the apple looks like it does, you decide to cut it up, take a look at the seeds, the core, the flesh and the skin. You gather the following facts about the apple:

  1. The core and seeds look just fine.
  2. The bump is "smooshy."
  3. There are many little discolored "tunnels" throughout the fleshy part.
  4. The flesh surrounding the tunnels appears rotted.
  5. The apple tastes very good.
  6. The skin of the apple is discolored in places.

Evaluation: OK, how "good" is the apple?

Since you have gathered information, you are able to evaluate the quality of the apple based on facts. You determine the apple is flawed. Now that you know there is a real problem, you can figure out the cause so the rest of the apples don't spoil. You have to conduct a cause analysis. You understand that everything you've identified so far represents only the observable, measurable effects of some cause.

Cause Analysis: OK, what's the cause?

The question now is, "What is the cause?" There are two basic types of causes you identify in your analysis: surface and root (very appropriate in our apple example ;-).


Surface causes: It's obvious the damage is caused by a bug of some kind. Considering all the information gathered, you search the internet and determine that an Apple Maggot has deposited eggs under the skin of the apple and fed on its flesh. You're quite happy about discovering the obvious surface cause, but why is the Apple Maggot causing a problem? It never has before! You've got to figure out the root cause.

Root cause: You know the maggot did its damage, but why? Asking "why" a number of times will help you eventually determine the less obvious underlying contributing causes of the spoiled apple. During root cause analysis you determine that:

  • the pesticide used on the apples was not effective against the Apple Maggot
  • the Apple Maggot, which is native to the eastern part of the country, has somehow migrated to the local area

With this information in hand, you will be able to develop strategies to overcome this infestation.

Evaluating the Safety Management System

[Flowchart: Evaluating the Safety Management System]

The negative effects of a flawed system are often due to inadequate resources, system design, and/or system performance. If one or more of these three system components are flawed, the effect will be flawed conditions and behaviors. Often, management must decide if a flawed condition or behavior is the result of a flaw in the system or a policy violation which may require disciplining the violator.

Management must determine if adequate resources were available, if the system design was adequate, and if the system performance was adequate. If any of the three system components were inadequate, then the system is at fault and no discipline should be administered. If all three of the system components were clearly adequate, then discipline may be necessary.

If discipline is used despite an inadequate system, employees will feel as though they are being blamed without cause. This can lead to resentment and low morale. It is important to only discipline if the system has been shown to be adequate.

The flowchart above can be used as a guide when evaluating the safety management system. If any of the questions can be answered with a "No," the system is inadequate and must be corrected. More than one system component may be inadequate; therefore, each component should be evaluated and corrected as necessary.
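To make the decision logic concrete, here is a minimal sketch of the flowchart's three questions in Python. The function name, parameters, and messages are illustrative assumptions for this course page, not part of the flowchart itself.

# Minimal sketch of the flowchart's decision logic (illustrative only;
# names and messages are assumptions, not course material).

def evaluate_system(resources_adequate: bool,
                    design_adequate: bool,
                    performance_adequate: bool) -> str:
    """Answer the three system-component questions from the flowchart."""
    flaws = []
    if not resources_adequate:
        flaws.append("resources")
    if not design_adequate:
        flaws.append("system design")
    if not performance_adequate:
        flaws.append("system performance")

    if flaws:
        # Any "No" means the system is at fault: correct every inadequate
        # component and do not discipline the employee.
        return "Fix the system; inadequate: " + ", ".join(flaws)

    # Only when all three components are clearly adequate may discipline
    # be considered.
    return "System adequate; discipline may be appropriate."

# Example: resources were adequate, but design and performance were not.
print(evaluate_system(True, False, False))
# -> Fix the system; inadequate: system design, system performance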

Scenario

Are any of the system components inadequate?

Bob, a maintenance worker with the company for 10 years, received a serious electrical shock while working on a conveyor belt motor. When Bob was asked why he did not use the company's established lockout/tagout procedures, he replied, "I thought about it, but the procedures were not current since the new equipment had been installed last year." Bob also indicated most of the other maintenance workers usually skipped the lockout/tagout procedures because they are constantly being told to "hurry up" and get the job finished.

  1. Resources: Did Bob have adequate resources to do the job?
    • Yes. Bob did have the necessary resources to use the lockout/tagout procedures.
  2. System Design: Was the design of the lockout/tagout program adequate?
    • No. The procedures were not current. They had not been updated since the installation of new equipment.
  3. System Performance: Were program policies and procedures being performed adequately?
    • No. The lockout/tagout policy was not being followed by the other maintenance workers because of the outdated procedures, and the workers were not given the time necessary to follow proper safety procedures.

Last Words

I hope this information on SMS evaluation and cause analysis has been helpful. In the next module, we'll discuss a few analysis tools and techniques. Remember, all this information on analysis will help you make factual conclusions about the quality of your SMS. Time to answer the review questions. Good luck!

Instructions

Before beginning this quiz, we highly recommend you review the module material. This quiz is designed to allow you to self-check your comprehension of the module content, but only focuses on key concepts and ideas.

Read each question carefully. Select the best answer, even if more than one answer seems possible. When done, click on the "Get Quiz Answers" button. If you do not answer all the questions, you will receive an error message.

Good luck!

1. This process judges the worth or effectiveness of the SMS.

2. The purpose of an SMS evaluation is to _____.

3. When the purpose of a process has been achieved, _____.

4. This is a standard by which the system can be measured or judged.

5. All of the following are mentioned as benchmarks in the text, EXCEPT _____.


Have a great day!

Important! You will receive an "error" message unless all questions are answered.