Within the draft ICH Q14 “Analytical Procedure Development” guidance document (here), Section 6 refers to the Analytical Procedure Control Strategy. This should be developed prior to, and confirmed via, Analytical Method Validation. A key component of the Analytical Procedure Control Strategy is the System Suitability Test (SST), but the guidance document also makes reference to sample suitability:
“In addition to SST, sample suitability assessment may be required to ensure acceptable sample response…. In these cases, sample suitability is a prerequisite for the validity of the result along with a satisfactory outcome of the SST.”
This aligns with the recently updated FDA guidance “Investigating Out-of-Specification (OOS) Test Results for Pharmaceutical Production” (here), which states (under its Section B cautions on averaging results from the same sample preparation):
“…there may be cases where the test method specifies appropriate acceptance criteria for variability and a pre-defined number of replicates from the final diluted sample solution to arrive at a result. For example, an HPLC test method may specify both acceptance criteria for variability and that a single reportable result be determined by averaging the peak response from a number of consecutive, replicate injections from the same test vial. In these cases, and given the acceptance criteria for variability are met, the result of any individual replicate in and of itself should not cause the reportable result to be OOS.”
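To make the arithmetic concrete, below is a minimal sketch of how such a replicate-averaging scheme might be applied in practice. The %RSD acceptance criterion (2.0%), the peak-area values, and the function name are hypothetical illustrations only; they are not values taken from the FDA guidance or from any specific method.

```python
from statistics import mean, stdev

def reportable_result(peak_responses, max_rsd_pct=2.0):
    """Illustrative only: average replicate injections from the same test vial
    into a single reportable result, provided a pre-defined variability
    criterion (%RSD; hypothetical limit here) is met."""
    avg = mean(peak_responses)
    rsd_pct = (stdev(peak_responses) / avg) * 100  # relative standard deviation, %
    if rsd_pct > max_rsd_pct:
        # Variability criterion exceeded: no valid reportable result; the
        # exceedance would be handled via the firm's PQS/investigation process.
        return None, rsd_pct
    # Criterion met: only the averaged value is the reportable result, so no
    # individual replicate is itself compared against the specification.
    return avg, rsd_pct

# Hypothetical example: five consecutive replicate injections (peak areas)
replicates = [101.2, 100.8, 101.5, 100.9, 101.1]
result, rsd = reportable_result(replicates)
print(f"Reportable result: {result}, %RSD: {rsd:.2f}")
```

The point the sketch illustrates is the one made in the quoted text: when the pre-defined variability criterion is met, the reportable result is the average, and an individual replicate is not, in and of itself, assessed as OOS.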
What is key is that the documented control strategy clearly delineates which method attributes/parameters impact the SST and sample suitability criteria (along with the reported result). It is recommended that, if a firm generates sample data (as part of a sample suitability assessment) that is to be invalidated because pre-defined criteria were exceeded, the invalidation occurs via an investigation (per the firm’s PQS), as this will serve to confirm that the exceedance of such criteria was a laboratory issue (and not reflective of material quality) and to document the actions taken to support repeat testing. This will also enable trending of such incidents as a means of monitoring the effectiveness of the method’s control strategy (as part of Lifecycle Management Control). Such an investigation has added significance if the control strategy exceedance is associated with an OOS data point.
Trending should distinguish between exceedances of SST criteria and exceedances of sample suitability criteria. If it is noted, across such control strategy investigations for a particular method, that the SST criteria were met but the sample suitability criteria were not, then this should be explained and traced back to the method development (robustness evaluation). Such a scenario may reflect the differing materials tested for SST versus sample suitability (i.e., a pure API reference standard versus a complex sample product matrix) and raise the question of whether the SST regime could be “enhanced.”
Trending of control strategy failures is critical, as an increase in failures (from an established, justified baseline) may highlight the need to act on the method itself, but could also identify issues at the laboratory system/program level; for example, if the increase in such issues coincided with a change in the laboratory instrument service provider, a change in laboratory staff, etc.
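As a purely illustrative sketch of such trending (the exceedance log, period labels, and baseline value below are hypothetical assumptions, not data from any firm or guidance), the following distinguishes SST exceedances from sample suitability exceedances per period and flags periods that rise above an established baseline:

```python
from collections import Counter

# Hypothetical exceedance log: (period, exceedance_type) entries as might be
# pulled from a firm's investigation/trending system; values are illustrative only.
exceedances = [
    ("2024-Q1", "SST"), ("2024-Q1", "sample_suitability"),
    ("2024-Q2", "sample_suitability"), ("2024-Q2", "sample_suitability"),
    ("2024-Q2", "SST"), ("2024-Q2", "sample_suitability"),
]

BASELINE_PER_PERIOD = 2  # assumed, justified baseline count per period

# Count exceedances per period, split by type, and flag periods above baseline
counts = Counter(exceedances)
for period in sorted({p for p, _ in exceedances}):
    sst = counts[(period, "SST")]
    samp = counts[(period, "sample_suitability")]
    total = sst + samp
    flag = "investigate trend" if total > BASELINE_PER_PERIOD else "within baseline"
    print(f"{period}: SST={sst}, sample suitability={samp}, total={total} -> {flag}")
```

Splitting the counts by exceedance type is what allows the SST-met/sample-suitability-failed pattern described above to be detected and traced back to method development.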
If you have any questions relating to your site’s Analytical Method Lifecycle Management Program or would like an evaluation of your firm’s program, please contact us at LCS@LachmanConsultants.com.