EMA and FDA Forge Common AI Principles for Medicine Development

Apr 6, 2026, 2:22 AM


The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have jointly established a set of ten principles aimed at guiding the responsible use of artificial intelligence (AI) throughout the medicines lifecycle. This landmark initiative responds to the rapid adoption of AI technologies among biopharma companies, which are reshaping practices from drug discovery to clinical testing and commercial rollout.
The principles offer broad guidance on AI applications in evidence generation and oversight across all phases of medicine development, from early research and clinical trials to manufacturing and post-market safety monitoring. This collaborative effort between the EMA and FDA is designed to reduce regulatory divergence between the European Union and the United States, which has previously posed significant barriers to digital innovation in the pharmaceutical sector.
European Commissioner for Health and Animal Welfare, Olivér Várhelyi, emphasized the importance of these principles, stating they represent a renewed cooperation between the EU and the US in the field of novel medical technologies. He noted that this initiative showcases how both regions can work together to maintain their leadership in the global innovation race while ensuring high levels of patient safety.
At the core of these principles is the requirement that AI systems must be human-centric by design and aligned with ethical values. This means that AI must be developed using a risk-based approach, taking into account the specific context of its use, and adhering to current legal, ethical, technical, and regulatory standards.
Furthermore, the EMA and FDA mandate that AI systems incorporate multidisciplinary expertise and strict data governance, ensuring privacy and protection of sensitive information. Continuous monitoring and performance assessments, including human-AI interaction testing, are necessary to confirm that these systems remain effective and appropriate for their intended purposes.
The principles also emphasize the use of clear, accessible language to communicate AI limitations and the underlying data to users and patients. This is crucial in addressing the so-called "AI black box" phenomenon, where the processes behind AI-generated results are often opaque to users.
Industry associations have welcomed this regulatory framework, recognizing it as a critical step toward achieving global regulatory convergence in AI applications. The European Federation of Pharmaceutical Industries and Associations (EFPIA) expressed optimism, acknowledging that these principles will facilitate a more coherent environment for scaling AI tools and collaborating with regulators across regions.
As AI technologies continue to evolve, the EMA and FDA's principles will be key in promoting safe and responsible innovation in medicine development. These guiding principles serve not only as a foundation for future AI guidance but also as a means to foster enhanced international collaboration among regulators, technical standard organizations, and other stakeholders.
The establishment of these ten principles marks a significant milestone in regulatory efforts to harness AI's potential in the pharmaceutical industry while safeguarding patient safety and ethical integrity. As the field evolves, continued dialogue and periodic updates to the principles will be needed to address emerging challenges, and the joint effort sets a precedent for future transatlantic collaboration on AI in medicine development.
