
AI in Regulatory Affairs

Between Promise and Proof

Daniela Drago, Partner, NDA Partners

Maryam Daneshpour, Biopharma Expert

As artificial intelligence gains momentum in biopharma, its integration into regulatory affairs presents both transformative opportunities and critical challenges. This feature examines the current state of AI’s realistic applications, barriers to adoption, inconsistencies across jurisdictions, and data privacy concerns. It also considers what a strategically augmented, AI-enabled regulatory function could look like in the near future.

Maryam: Is AI adoption in regulatory functions truly gaining ground across biopharma, or does it remain mostly confined to pilot initiatives and isolated innovation labs?

Daniela Drago: The adoption of AI within regulatory functions in the biopharma sector is clearly advancing beyond isolated pilot programs, especially at larger multinational pharmaceutical companies. Regulatory intelligence and pharmacovigilance are two key areas. Some organizations have stated publicly, for instance at conferences, that they already use AI for various regulatory intelligence activities, such as analyzing global regulatory updates, guidance documents, and competitor activities. The volume and complexity of the data that pharmacovigilance teams must manage keep growing, which demands improvements in the efficiency and accuracy of drug safety monitoring. So it is not surprising that there is growing interest in automating aspects of adverse event case processing and signal detection. While many organizations, especially small ones, are still exploring options and may not have implemented anything yet, the importance of AI is broadly acknowledged. The focus of the conversation is shifting from whether to adopt AI to how and when to invest in and implement it.
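
To make the pharmacovigilance example concrete, the sketch below shows, in schematic Python, the kind of rule-based triage step that automation might add upstream of human case review. Everything here (the case structure, the seriousness terms, the routing labels) is hypothetical and greatly simplified; real seriousness criteria, such as those in ICH E2A, are far more nuanced, and no company's actual system is depicted.

```python
# Illustrative sketch only: a rule-based triage step of the kind that might
# sit upstream of human pharmacovigilance review. The terms and case
# structure are hypothetical, not drawn from any regulatory standard.
from dataclasses import dataclass

# Hypothetical terms suggesting a serious outcome (real criteria are
# considerably more nuanced than a keyword match).
SERIOUS_OUTCOME_TERMS = {
    "death", "hospitalization", "life-threatening",
    "disability", "congenital anomaly",
}

@dataclass
class AdverseEventCase:
    case_id: str
    narrative: str

def triage(case: AdverseEventCase) -> str:
    """Route a case for expedited or routine human review.

    The automation only prioritizes; a qualified safety professional
    still assesses every case.
    """
    text = case.narrative.lower()
    if any(term in text for term in SERIOUS_OUTCOME_TERMS):
        return "expedited-human-review"
    return "routine-human-review"

if __name__ == "__main__":
    case = AdverseEventCase("C-001", "Patient required hospitalization after dose 2.")
    print(case.case_id, "->", triage(case))  # C-001 -> expedited-human-review
```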

Maryam: How aligned are major regulatory agencies (FDA, EMA, PMDA, etc.) in their understanding and acceptance of AI-enabled regulatory tools?

Daniela Drago: Major regulatory agencies have expressed a desire for at least some convergence in developing frameworks for the oversight of AI and machine learning (ML) in medical products. They certainly recognize the potential of AI to drive efficiencies, but also acknowledge the unique challenges and risks associated with it, for example, in AI-enabled medical products. As of now, however, specific guidance and acceptance criteria for AI-enabled regulatory tools still vary, and companies that operate in different jurisdictions need to be aware of the lack of complete standardization. Groups such as the International Coalition of Medicines Regulatory Authorities (ICMRA) serve as an important forum to advance conversations about AI. ICMRA promotes international cooperation among medicines regulatory authorities and has the clear goal of enhancing global dialogue, sharing information and learning, and promoting convergence to ensure a more efficient use of resources. A few years ago, ICMRA published a report with recommendations to help global medicines regulators address challenges with the use of AI. Where all regulators already seem to agree is that AI tools submitted or used in regulatory contexts must be appropriately validated, transparent to a degree that allows for assessment, and consistently reliable.

Maryam: Among the AI-related guidelines or discussion papers published by regulatory bodies like the FDA, EMA, and WHO, which do you find the most comprehensive or realistic for biopharma applications, and why?

Daniela Drago: Several valuable documents have been published, but I believe that the EMA's "Draft reflection paper on the use of Artificial Intelligence in the medicinal product lifecycle" (EMA/CHMP/CVMP/SAWP/339394/2023) stands out. The document explicitly addresses AI applications across the entire medicinal product lifecycle, from discovery and development through to post-authorization. The biopharmaceutical industry welcomed the risk-based approach for developing, deploying, and monitoring AI and machine learning tools outlined in the reflection paper. The paper describes how the level of risk varies based on key factors such as data quality, the context of use, and the degree of influence the technology has. So the paper offers a balanced and pragmatic perspective that resonates well with the industry.
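
As a purely schematic illustration of what risk-based tiering can look like in practice, the sketch below scores a hypothetical AI use case on two of the factors the reflection paper names. The tiers, labels, and logic are invented for illustration and are not the paper's actual framework.

```python
# Schematic illustration of risk-based tiering for AI use cases, loosely
# inspired by factors the EMA reflection paper names (data quality and the
# technology's degree of influence). The tiers and decision logic are
# invented for illustration only.

def risk_tier(data_quality: str, influence: str) -> str:
    """Return a coarse, illustrative risk tier for an AI use case.

    data_quality: "high" or "low" (confidence in the training/input data)
    influence: "advisory" (output informs a human) or "decisional"
               (output directly drives a regulatory-relevant decision)
    """
    if influence == "decisional":
        return "high-risk: extensive validation and human oversight expected"
    if data_quality == "low":
        return "medium-risk: strengthen data governance before deployment"
    return "lower-risk: routine validation and monitoring"

# Example: a literature-screening assistant vs. a tool proposing label text.
print(risk_tier("high", "advisory"))    # lower-risk
print(risk_tier("high", "decisional"))  # high-risk
```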

Maryam: How can global harmonization of AI use in regulatory affairs be realistically achieved when low- and middle-income countries often lack the digital infrastructure to support these systems?

Daniela Drago: Achieving global harmonization in the use of AI for regulatory affairs, especially considering the digital divide affecting low- and middle-income countries (LMICs), is likely a very long-term goal. Regulatory convergence would likely be attainable sooner. Some approaches that might be helpful include facilitating collaborative pilot programs in LMICs, with support and expertise from more digitally advanced regulatory agencies, which could provide valuable experience, demonstrate tangible benefits, and foster wider adoption. The initial focus could be on foundational principles, equitable support for infrastructure development, and the promotion of accessible, scalable solutions.

Maryam: What are the most common organizational or cultural barriers in the biotech and pharma industry preventing regulatory teams from embracing AI tools?

Daniela Drago: Several organizational and cultural factors can hinder the adoption of AI in regulatory affairs teams. One significant challenge is the high initial investment required to implement AI solutions, which involves considerable upfront costs for technology, specialized or retrained talent, and process changes. This investment can make it difficult for some organizations to demonstrate a clear return on investment, especially when the benefits of AI are not immediately visible. Additionally, AI systems rely heavily on high-quality, integrated data; legacy IT systems and inconsistent data standards can therefore be a significant barrier. Many professionals also face an AI literacy gap, and validating adaptive AI models is more complex and resource-intensive, posing further challenges. To overcome these barriers, organizations likely need a clear vision for AI integration, strong leadership commitment, and targeted investments with well-defined pilot projects that showcase value and build internal confidence.

Maryam: What safeguards should companies implement to protect both proprietary and patient-level data when using AI tools, particularly cloud-based or third-party systems that may come under regulatory scrutiny?

Daniela Drago: Robust safeguards are essential to maintain confidentiality, integrity, and availability while meeting stringent regulatory expectations. Based on what I have read and heard from colleagues, this involves more than just establishing and enforcing clear policies and procedures for data access, use, storage, retention, de-identification/anonymization, and secure disposal. Key components include multi-layered security controls such as strong encryption for data both in transit and at rest, and robust identity and access management with multi-factor authentication. Network security measures such as firewalls and intrusion detection/prevention systems are also important, as are regular vulnerability assessments, penetration testing, and ongoing training. Individuals should be knowledgeable about data privacy principles, information security policies, responsible use of AI, and the requirements for handling sensitive data.
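
The sketch below illustrates, in minimal Python, two of the safeguards mentioned above: de-identification before data reaches a downstream AI tool, and encryption at rest. It relies on the third-party cryptography package (pip install cryptography); the redaction pattern and identifier format are hypothetical placeholders, not a validated anonymization method.

```python
# Minimal sketch of de-identification plus encryption at rest. The "PT-12345"
# identifier format and the single regex are illustrative placeholders only;
# validated anonymization covers far more than one pattern.
import re
from cryptography.fernet import Fernet

# Hypothetical pattern for a patient identifier like "PT-12345".
PATIENT_ID_PATTERN = re.compile(r"\bPT-\d+\b")

def deidentify(text: str) -> str:
    """Redact obvious patient identifiers before downstream AI processing."""
    return PATIENT_ID_PATTERN.sub("[REDACTED]", text)

def encrypt_at_rest(plaintext: str, key: bytes) -> bytes:
    """Symmetric encryption for stored records (Fernet: AES-128-CBC + HMAC)."""
    return Fernet(key).encrypt(plaintext.encode("utf-8"))

if __name__ == "__main__":
    key = Fernet.generate_key()  # in practice, managed by a KMS, never hard-coded
    record = "Case PT-12345: patient reported dizziness after infusion."
    safe = deidentify(record)
    stored = encrypt_at_rest(safe, key)
    print(safe)                                          # redacted text
    print(Fernet(key).decrypt(stored).decode("utf-8"))   # round-trip check
```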

Maryam: How should companies plan for the potential of AI failure or hallucination in high-stakes regulatory contexts? Are backup workflows or “human-in-the-loop” systems sufficient?

Daniela Drago: Preparing for potential AI system failures, particularly “hallucinations” (where AI generates factually incorrect or nonsensical outputs), requires a proactive, multi-faceted strategy. Human-in-the-loop systems are often essential: AI should be viewed as a tool to augment and assist human experts, rather than to replace them entirely. Qualified and well-trained personnel must review, verify, and, if necessary, override AI-generated outputs or decisions. Thorough validation against predefined acceptance criteria helps to minimize the likelihood of failures. To detect any degradation, drift, or unexpected behavior over time, companies need to ensure ongoing monitoring of the AI model's performance, and systems should be designed with clear fail-safe mechanisms. If an AI tool malfunctions or produces outputs that are suspect or fall outside acceptable parameters, there must be a well-defined and validated process to revert to a manual method or an alternative (and validated) system. Until AI models achieve exceptionally high levels of reliability, robustness, and trustworthiness, a combination of diligent human-in-the-loop oversight, continuous monitoring, and well-rehearsed backup plans remains the most responsible and defensible approach.
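
The human-in-the-loop and fail-safe pattern described here can be made concrete with a short sketch. The confidence threshold, the review tag, and the fallback wiring below are assumed design choices for illustration, not a validated workflow.

```python
# Minimal sketch of a human-in-the-loop wrapper with a fail-safe fallback.
# The threshold and routing labels are hypothetical design choices.
from typing import Callable

CONFIDENCE_THRESHOLD = 0.90  # assumed acceptance criterion

def run_with_oversight(
    ai_generate: Callable[[str], tuple[str, float]],
    manual_process: Callable[[str], str],
    task: str,
) -> str:
    """Use the AI output only when it meets the predefined criterion;
    otherwise revert to the validated manual process. In either branch,
    a human reviews before anything is submitted."""
    try:
        draft, confidence = ai_generate(task)
    except Exception:
        # Fail-safe: any malfunction reverts to the manual method.
        return manual_process(task)
    if confidence < CONFIDENCE_THRESHOLD:
        # Output falls outside acceptable parameters: fall back.
        return manual_process(task)
    # Human-in-the-loop: route the draft for expert verification.
    return f"[PENDING HUMAN REVIEW] {draft}"

# Example with stubbed components:
ai = lambda t: (f"Draft summary for {t}", 0.95)
manual = lambda t: f"Manually prepared summary for {t}"
print(run_with_oversight(ai, manual, "Module 2.5 section"))
```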

Maryam: Do you think AI will eventually influence how regulators themselves make decisions, and if so, how can companies prepare for that shift?

Daniela Drago: It is probable that AI will increasingly influence how staff at regulatory agencies make decisions. This will likely be an evolutionary process, carefully managed to ensure continued protection of public health. Precursors to this are already visible in agencies' use of advanced analytical tools for submission management, data review, and compliance oversight. AI tools are already assisting reviewers in some agencies (e.g., Elsa at the FDA). It is important for life science companies to actively monitor how regulatory agencies are exploring, piloting, and adopting AI tools and methodologies. It is advisable to participate in public consultations, workshops, and relevant pilot programs when opportunities arise. The goal for regulators is and will continue to be to use AI to augment and enhance expert human judgment, not to replace it. Companies that generate high-quality, transparent data and can clearly articulate their scientific positions and risk management strategies will be best positioned to navigate this evolving landscape.

Maryam: In your view, what is the most common misconception about using AI in regulatory affairs today, and why do you think it persists?

Daniela Drago: The most common misconception is that AI will soon largely eliminate the need for human expertise and judgment. This myth likely persists for several reasons. Public and media discourse around AI often highlights its most advanced and sometimes speculative capabilities, which can overshadow its current practical limitations, especially in the life sciences. Individuals less familiar with regulatory affairs may not fully appreciate the critical thinking and nuanced judgment the field requires. It is no trivial task to interpret regulations, negotiate with health authorities, and make strategic decisions that help a business strike a balance between innovation and the protection of public health. These are tasks that current AI is not equipped to perform autonomously. There is also a natural human inclination to seek powerful technological solutions for complex challenges; the idea of AI as a "magic bullet" is attractive but likely unrealistic. Some of the persistence of this misconception may also be fueled by workforce anxieties about AI replacing human roles, which lead to an overestimation of its current capabilities. The current and near-term reality is that AI can serve as a powerful augmentative technology in regulatory affairs. Used properly, it can free regulatory professionals to concentrate on higher-value activities such as strategic thinking, complex problem-solving, and stakeholder engagement. The future is one of sophisticated human-machine collaboration, and, in my opinion, the value of experienced human judgment remains irreplaceable.

Maryam: And finally, what does a high-functioning, AI-enabled regulatory affairs department look like five years from now—fully automated or strategically augmented?

Daniela Drago: I envision a high-functioning regulatory affairs department five years from now as strategically augmented by AI, not fully automated. The human element will remain central, empowered by excellent tools. The objective of AI integration in regulatory affairs is not to diminish the human role but to elevate it. A strategically augmented regulatory affairs department will be more agile, more insightful, more proactive, and ultimately more effective in navigating the complex global regulatory environment, thereby facilitating timely patient access to high-quality, safe, and effective medical innovations.


Author Bio

Daniela Drago

Daniela Drago is a global life science regulatory expert with extensive and diverse experience in product development. She has worked with companies ranging from start-ups to the Fortune 500 and was an Associate Professor at George Washington University's School of Medicine. A RAPS Board member and recipient of the TOPRA Award for Regulatory Excellence, she holds a Ph.D. in Chemistry from ETH Zurich.

Maryam Daneshpour

Maryam Daneshpour holds a Ph.D. in biotechnology and an MBA, combining scientific and business training. She has worked across R&D, quality control, and business development in the pharmaceutical and biotech industries, contributing to diverse projects across multiple companies with a focus on biopharmaceuticals, biosimilars, and strategic collaboration.