The WHO issued a draft guidance document in July 2013 titled “Deviation Handling and Quality Risk Management” that addresses the concept that not all Quality Events are equivalent and offers considerations for the classification of events.  For example, an incident would be an event that does not:

“…affect a product attribute, manufacturing operational parameter or the product’s quality”

or does not:

“…contradict or omit a requirement or instruction contemplated in any kind of approved written procedure or specification”

On the other hand, if the answer to either of the above is yes, then the event would be classified as a deviation, which would require:

a higher level of analysis and documentation, and would typically be managed under a deviation-handling procedure

With such deviation-handling procedures, companies typically further categorize the deviation as minor, major, or critical, with the understanding that, through the investigation process, an event may need to be “upgraded” in status.  For example, if a trend assessment shows a high frequency of incidents associated with a particular method, processing step, activity, or procedure, that trend could justify upgrading the incident to a deviation so that the root cause, impact, and CAPA for the repeated incidents can be determined.  All of this serves the higher-level goal of ensuring that the level of investigation effort corresponds to the level of risk posed by the event.
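To make this triage logic concrete, the following is a minimal, purely illustrative sketch in Python; the class, field names, and trend threshold are hypothetical and are not taken from the WHO guidance or from any firm’s procedure.

```python
from dataclasses import dataclass


@dataclass
class QualityEvent:
    """Hypothetical record of a quality event (field names are illustrative only)."""
    affects_product_or_process: bool  # affects a product attribute, operational parameter, or quality
    contradicts_procedure: bool       # contradicts/omits an approved written procedure or specification
    similar_recent_incidents: int     # count of like-for-like incidents from a trend assessment


def classify_event(event: QualityEvent, trend_threshold: int = 3) -> str:
    """Apply the two screening questions, then a trend-based upgrade for repeated incidents."""
    if event.affects_product_or_process or event.contradicts_procedure:
        return "deviation"  # handled under the deviation procedure (minor/major/critical)
    if event.similar_recent_incidents >= trend_threshold:
        return "deviation"  # upgraded so root cause, impact, and CAPA can be determined
    return "incident"       # lower-level documentation, but still recorded
```

In practice, the threshold and the categorization into minor, major, or critical would be defined by the firm’s own quality system rather than hard-coded as above.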

How does this apply to a laboratory setting?  What would be classed as an incident?  It is common for firms to classify an event associated with exceeding a method’s system suitability test (SST) requirement, where no sample data were generated, as an incident and, conversely, an event with a direct impact on sample data, such as the generation of an OOS result, as a deviation.  This approach does make sense when assessed against the means of categorization that the WHO guidance recommends.

However, such an approach warrants a level of caution.  On the face of it, one may feel justified in recording the SST failure as an incident in the batch record or logbook and proceeding accordingly.  With an SST failure, though, the analyst’s question can become, “Should I just repeat the SST?”  One should then ask: if I repeat the SST and generate a passing result, am I justified in invalidating the previous SST result, and which SST result should I consider reflective of the performance of my system?  In addition, continued SST failures on a given method or instrument may indicate issues with the performance of the method and/or with the instrument preventive maintenance/calibration program and, thus, a potential risk to, and impact on, the reported result.

This is why it is recommended that even a “laboratory incident,” such as an SST failure, still receive a level of investigation to justify the subsequent steps.  Admittedly, this may not be the full-blown root-cause analysis associated with an OOS investigation, but one may examine the nature of the failure and compare it to historical SST results, with the goal of defining an appropriate action to remediate the cause of the SST failure (and documenting the effectiveness of that action once complete).  Ultimately, the goal is to justify the validity of any repeat testing based upon the effectiveness of the remediating action and, thus, to establish the scientific basis for invalidating the initial failed SST result.  Taking such an approach should mitigate the risk that the laboratory will be perceived as, or accused of, “testing into compliance” when addressing SST laboratory events.
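As one illustration of what comparing a failed SST result to historical SST results could look like, here is a short, hypothetical Python sketch; the function name, inputs, and the simple mean/standard-deviation comparison are assumptions for illustration, not a prescribed or validated trending method.

```python
from statistics import mean, stdev


def sst_failure_context(current_result: float, historical_results: list[float]) -> dict:
    """Summarize how a failed SST result compares with historical results,
    purely to support (not replace) the documented laboratory investigation."""
    mu = mean(historical_results)
    sigma = stdev(historical_results)
    return {
        "n_historical": len(historical_results),
        "historical_mean": mu,
        "historical_stdev": sigma,
        "offset_in_stdevs": (current_result - mu) / sigma if sigma else float("inf"),
    }
```

A result that sits far outside the historical distribution points toward an assignable cause (for example, an instrument or preparation issue) that should be remediated, with the effectiveness of that remediation documented, before any repeat SST is relied upon.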

If you have any questions relating to the above topic or how it applies in your firm, Lachman Consultants can help you!  Please contact LCS@lachmanconsultants.com for support with this critical undertaking.