I have to say that I’m a little concerned by the large number of Layer of Protection Analysis (LOPA) practitioners who hold an almost religious fervor over the sanctity of their LOPA results.  The belief that “we have built a perfect system and the results of the system should never be questioned – they should just be implemented” is all too common.  The reality is that LOPA is a very simple tool for solving very simple problems.  It is far from the gold standard for risk analysis.  In fact, practitioners who were involved in developing LOPA at its inception, and those with a lot of LOPA experience, will be the first to tell you that LOPA is brittle and can be broken very easily (in some cases intentionally) to generate wildly inappropriate results.

The question that needs to be answered is, “What do you do when LOPA attacks?”  What do you do when a LOPA generates results and recommendations that defy logic, common sense, and benchmarks for commonly applied equipment?  The answer is to pass the LOPA scenario to a more detailed analysis, one built to assess the components of the risk problem in enough detail that the “real” risk can be understood, as opposed to the quick-and-dirty conservative estimate that the “rules” produce.  At Kenexis we call this next step in the risk assessment process a “Focused QRA”, and these projects are almost always worth their weight in gold.

In a Focused QRA, a VERY expert analyst goes into whatever level of detail is required to build a true representation of the risk so that a true answer can be achieved.  On the consequence side, this includes detailed release models, fire and explosion models, dispersion models, and detailed occupancy, probability-of-ignition, and personnel vulnerability calculations.  On the likelihood side, it includes detailed reliability modeling, human error probability analysis, and fault tree and event tree analysis to fully quantify the multitude of possible outcomes and determine overall risk.  Some level of financial analysis, such as determining the benefit-to-cost ratio of risk reduction measures, is also very valuable, especially where the primary risk is environmental or commercial loss.  After this level of detailed analysis, it is often determined that the LOPA results were unnecessarily conservative and would have driven ridiculous projects.
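The benefit-to-cost screening mentioned above reduces to simple arithmetic: compare the annualized loss avoided by a risk reduction measure against the annualized cost of owning it.  A minimal sketch, with all input figures being hypothetical placeholders rather than numbers from any actual study:

```python
# Minimal benefit-to-cost screening sketch for a proposed risk
# reduction measure. All inputs below are hypothetical placeholders.

def benefit_to_cost(event_freq, event_cost, rrf, annual_cost_of_measure):
    """Ratio of annualized loss avoided to annualized cost of the measure.

    event_freq:  unmitigated event frequency (per year)
    event_cost:  expected loss per event (commercial/environmental, $)
    rrf:         risk reduction factor the measure provides
    annual_cost_of_measure: annualized install + maintenance cost ($/yr)
    """
    baseline_loss = event_freq * event_cost    # $/yr before the measure
    mitigated_loss = baseline_loss / rrf       # $/yr after the measure
    benefit = baseline_loss - mitigated_loss   # $/yr of loss avoided
    return benefit / annual_cost_of_measure

# Hypothetical case: a 0.01/yr event costing $5M, a safeguard giving
# RRF 100, costing $40k/yr to own.
ratio = benefit_to_cost(0.01, 5_000_000, 100, 40_000)
print(f"Benefit-to-cost ratio: {ratio:.2f}")  # ratios > 1 favor the measure
```

Ratios well above 1 argue for the measure; ratios below 1 suggest the money is better spent elsewhere, which is exactly the kind of conclusion a rule-driven LOPA cannot reach on its own.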

As an example, consider a LOPA that was done on high level hazards in a tank farm housing flammable materials.  The very simple LOPA process that was employed stated that, since filling a tank is a daily event, the frequency of failing to do it properly is about once per year.  The consequence was determined to be a fatality, because that is the “worst credible” thing that could happen if an overfill occurred, and since operations and maintenance staff sometimes went into the area near the tanks, the area was considered “occupied”.  Furthermore, since there is no independent level measurement on these tanks, no credit was taken for operator intervention as an independent protection layer.  Ultimately, the LOPA process yielded a requirement for SIL 4 shutdowns on high level in an application where no shutdowns currently exist.

Now, while this seems reasonable to a linear, rule-based thinker, it bears no resemblance to reality.  And as people from the designers of the Titanic to the Nobel laureates at Long-Term Capital Management have found out, if you start making decisions based on your “model” of reality while ignoring reality itself, you are doomed to failure.  The “model” of reality (i.e., the LOPA) indicated a problem – reality did not.  In this case, who do you believe, the model or the reality?  The model predicted one overfill per year per tank, with a fatality resulting from each overfill.  At this facility, with an inventory of 100 tanks, the result should have been 100 fatalities per year…  What was the reality?  In 50 years of operation, there was only one overfill, and it had no safety consequence.  The “model” (i.e., the LOPA) over-predicted the risk by several orders of magnitude.  LOPA is not the right tool to analyze this situation, because it is an extremely complicated situation in which multiple people interact with the process using multiple measurements – any of which makes a large level anomaly glaringly obvious.  Detailed human factors engineering is required to understand the “real” situation, and LOPA is simply not capable of it.
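The gap between the model and the operating history can be quantified directly from the figures above (100 tanks, one predicted overfill per tank-year, one actual overfill in 50 years across the whole farm):

```python
# Comparing the LOPA "model" to the plant's operating history,
# using the tank count and history from the example above.

tanks = 100
model_overfills_per_tank_year = 1.0  # the LOPA prediction
fatalities_per_overfill = 1.0        # the LOPA's assumed consequence

model_fatalities_per_year = (
    tanks * model_overfills_per_tank_year * fatalities_per_overfill
)  # -> 100 predicted fatalities per year

years_of_operation = 50
observed_overfills = 1               # and no safety consequence at all
observed_rate = observed_overfills / (tanks * years_of_operation)
# -> 0.0002 overfills per tank-year

overprediction = model_overfills_per_tank_year / observed_rate
print(f"Model: {model_fatalities_per_year:.0f} fatalities/yr predicted")
print(f"Frequency over-prediction: {overprediction:.0f}x")
# The frequency alone is off by a factor of 5,000, before even counting
# the over-predicted consequence severity.
```

Three to four orders of magnitude of error in frequency, multiplied by an over-predicted consequence, is how a “conservative” model produces a SIL 4 demand for a hazard that history shows is benign when handled by attentive operators.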

The key takeaway from this episode is that LOPA is a quick-and-dirty estimate whose results should be regarded with suspicion, not a divine insight that should never be questioned so long as the “rules” are followed.  If your LOPA is giving you results that don’t seem correct, that’s because they probably aren’t correct, and they probably need to be analyzed in more detail.  In our experience, 5-10% of LOPA scenarios (good ones, that strictly follow a well-defined procedure and data set) that result in recommendations are flawed, and would yield different and more appropriate recommendations if analyzed in more detail.  In those cases, a little more detailed analysis will pay for itself many times over…