When new technology takes the industry by storm, does your QA team have the “education, training, and experience” necessary to execute its responsibilities? Even when activities are outsourced, the contract giver needs the requisite skills to provide adequate oversight. These concepts are summarized in the FDA guidance “Q10 Pharmaceutical Quality System” for both in-house and outsourced activities. However, data governance (DG) risks and artificial intelligence (AI) deployment are quickly outpacing the traditional background of the quality professionals responsible for overseeing these technologies within an organization.
From the regulators’ vantage point, public statements and strategies have been issued to increase and modernize agency capabilities. Further reading is available from the FDA and EMA, respectively, at:
- Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together (fda.gov)
- Artificial intelligence workplan to guide use of AI in medicines regulation | European Medicines Agency (europa.eu)
The power and potential of AI are undeniable; however, one commonly used element of an overall compliance strategy is to ensure that a human remains in the loop to provide oversight. Even when activities are outsourced, that oversight must combine the technical knowledge and the quality insight needed to provide guidance and guardrails. In the risk-averse environment of regulated industry, organizations will come under stress if traditional QA roles do not invest in AI capability to ensure that oversight is appropriately executed, now and in the future. Depending on a company’s organizational structure, the validation function will also need to be considered, since it is often responsible for the functional testing of systems.
The EU Artificial Intelligence Act has already defined high-risk systems as those that meet both of the following conditions:
(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonization legislation listed in Annex I;
(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonization legislation listed in Annex I.
If your quality organization is not familiar with how to apply these definitions at your site, you may already be behind in the race for qualified talent. Any mature organization recognizes that the technology and the quality oversight need to work in unison. CB Insights Research has published readiness scores for the top 50 companies (Pharma AI Readiness Index: Who’s best-positioned for the AI boom? CB Insights Research). Many of these companies have partnered with outsourced providers, which in turn puts new pressure on a company’s vendor qualification program. Many industry insiders believe that the readiness of quality professionals lags even these aggregated scores.
Is your organization investing in the future of its quality professionals, or will you need to buy talent from an increasingly stressed market? Lachman Consultants can help your firm identify key areas of risk with our established methodologies, and we can help incorporate mitigations so that the convergence of risk management, technology, and quality-oversight capability does not leave your firm lagging behind.
Lachman Consultants can help your company identify key skills and build data-centric capability in your Quality Assurance organization. Additionally, we can help deliver robust compliance strategies for your AI solutions in GMP use. Reach out to us at LCS@LachmanConsultants.com for a free consultation.