by Giuliana Miglierini
The rapidly evolving role of artificial intelligence (AI) and its possible application in the pharmaceutical field led the European Medicines Agency (EMA) to publish a draft Reflection paper on the use of AI along the entire lifecycle of both human and veterinary medicines.
The Reflection paper summarises the current view on the possible use of AI and machine learning (ML) technologies and is open to public consultation until 31 December 2023. Comments, together with the outcomes of the joint HMA/EMA workshop to be held on 20-21 November 2023, will support its finalisation, as well as the possible development of new guidance. The initiative aims to identify the aspects of AI/ML use that would fall within the remit of EMA or of the individual National Competent Authorities (NCAs) in charge of the dossiers’ assessment.
The document is part of the effort to improve the European Medicines Regulatory Network’s capability in data-driven regulation set forth by the joint HMA-EMA Big Data Steering Group (BDSG), and it has been developed in coordination with EMA’s CHMP and CVMP committees.
“The use of artificial intelligence is rapidly developing in society and as regulators we see more and more applications in the field of medicines. AI brings exciting opportunities to generate new insights and improve processes. To embrace them fully, we will need to be prepared for the regulatory challenges presented by this quickly evolving ecosystem” said Jesper Kjær, Director of the Data Analytics Centre at the Danish Medicines Agency and co-chair of the BDSG.
“With this paper, we are opening a dialogue with developers, academics, and other regulators, to discuss ways forward, ensuring that the full potential of these innovations can be realised for the benefit of patients’ and animal health” added Peter Arlett, EMA’s Head of Data Analytics and Methods, co-chair of the BDSG.
The main contents of the Reflection paper
The Reflection paper addresses the use of AI/ML with a dedicated chapter for each step of the overall lifecycle of a medicinal product. The interactions with regulators are then explored, as well as the different technical aspects to be taken into consideration. The final chapters discuss the governance of AI/ML applications, the integrity and ethical aspects to be considered, and how to address data protection.
The increasing use of AI technologies is based on the wide availability of large amounts of data routinely produced and captured in electronic format. These data can be analysed by machine learning algorithms to inform the training of other systems able to analyse data and take actions with some degree of autonomy in order to achieve specific goals. These latter systems are the AI algorithms proper, whose use is also under discussion in the field of regulatory decision-making.
The Reflection paper recalls how the diffusion of these new technologies also carries new risks, due to the “exceptionally great numbers of trainable parameters arranged in non-transparent model architectures”. Measures should thus be taken to ensure the safety of patients and the integrity of data, as well as to avoid the integration of bias into AI/ML applications.
The contents of the Reflection paper may also refer to combinations of a medicinal product and a medical device incorporating AI/ML, since in such cases EMA is involved in the assessment of the characteristics of the device.
In general terms, existing guidelines, best practices, and recommendations relating to model-informed drug development and biostatistics also apply to AI/ML; the adjacent methodology guidelines that may be relevant in this context are listed in the document. The Reflection paper warns readers about the differences between the human and veterinary regulatory domains, suggesting that a separate reflection document specific to the veterinary domain may be developed in the future.
A risk-based approach
The risk-based approach typical of pharmaceutical development should be used to identify risks throughout the entire AI/ML tool lifecycle, including their regulatory impact. According to the Reflection paper, the level of risk may also depend on the context of use and the degree of influence the AI technology exerts.
Developers should always seek early regulatory interaction and scientific advice, especially in cases where the AI/ML system may impact the benefit-risk ratio of the medicinal product under development. To this end, the EMA Innovation Task Force (ITF) is responsible for providing early interaction on experimental technology, while scientific advice and qualification of novel methodologies in medicines development are provided by the Scientific Advice Working Parties (SAWP) of the CHMP for human medicines and of the CVMP for veterinary ones.
As for all other steps of the pharmaceutical lifecycle, the final responsibility for the suitability and use of AI/ML algorithms, as well as their compliance with ethical, technical, scientific, and regulatory standards (GxP standards) and with current EMA scientific guidelines, lies with the Marketing Authorisation Holders (MAHs) or applicants.
According to the Reflection paper, risks at the discovery step may be limited, but where results impact the final evidence presented for regulatory review, the principles for non-clinical development should be followed. This second step may include, for example, AI/ML modelling approaches in line with the 3R principles. Regulators would expect to receive Standard Operating Procedures (SOPs) for AI/ML applications in preclinical studies; the advisory documents on Application of GLP Principles to Computerised Systems and GLP Data Integrity should also be considered where appropriate. A pre-specified analysis plan would be needed for preclinical data potentially relevant to the assessment of the benefit-risk balance.
In the field of clinical trials, ICH E6 and VICH GL9 on human and veterinary GCPs should also be applied to the use of AI/ML. In particular, should a model be generated for clinical trial purposes, regulators expect the full model architecture and a description of the data processing pipeline to be made fully available for comprehensive assessment, as they would be considered part of the clinical trial data or the trial protocol dossier.
From precision medicine to pharmacovigilance
Particular considerations may be needed for applications of AI/ML in precision medicine, to individualise treatment according to the patient’s characteristics. AI/ML applications supporting indication or posology decisions should be referenced in the Summary of Product Characteristics, and their assessment falls under medicines regulation, representing a high-risk use from both the patient and regulatory perspectives. Close human supervision is also expected for AI applications used to draft or translate product information documents.
There are many possible applications of AI/ML in the manufacturing of medicinal products, where the usual quality risk management principles (ICH Q8, Q9 and Q10) should be followed to guarantee safety, quality, and data integrity. New recommendations are also expected from EMA.
In the post-authorisation phase, AI/ML tools can effectively support efficacy and safety studies and post-marketing surveillance studies for veterinary medicines, as well as pharmacovigilance activities. Good pharmacovigilance practices should always be respected, and the model used should be validated, monitored and documented under the responsibility of the MAH. For conditional marketing authorisations, AI/ML applications should be discussed within a regulatory procedure unless details have already been agreed at the time of authorisation.