Responsibly unleashing artificial intelligence in pandemic preparedness

What if we could outsmart a pandemic virus even before it emerges? With the rapidly evolving artificial intelligence (AI) landscape, this hypothetical is becoming a public health reality. From predicting which viral family the next pandemic may spill over from, to forecasting how a virus might evolve, AI is reshaping the way the world can prepare for and respond to pandemic threats.
As these capabilities gather pace, we also recognise the essential need to ensure their use is responsible, ethical and secure. The need to strike this balance is increasingly recognised across the world.
From the US to the EU to Africa CDC and beyond, governments and think tanks are rapidly exploring policies to promote responsible use of AI and define norms for how AI tools are trained, developed and accessed. The message is clear: reducing biosecurity vulnerabilities and promoting responsible AI development are not fringe concerns but a global priority.
At CEPI, we’re already working to harness AI’s promise for good, pursuing and mobilising partnerships that can accelerate global progress towards the 100 Days Mission—a goal to produce pandemic-busting vaccines in response to a novel disease threat within 100 days. Whether that’s supporting platforms like UC Davis’ “SpillOver” to help prioritise research on virus families with pandemic potential, using AI to identify immune-stimulating targets for future vaccines against threats like Nipah and Lassa, or applying Rosetta Macromolecular Modelling to accelerate novel immunogen design, CEPI is investing in cutting-edge AI-driven tools to speed up vaccine research and development.
But just as AI can help reduce vaccine development timelines and strengthen global pandemic preparedness, it also brings new risks, particularly in biosecurity. These include the potential misuse of AI-powered design tools and information to engineer harmful pathogens that could be more deadly and harder to contain.
To maximise the benefits of new technologies while reducing risks to human health and society, we must work across civil society, industry, and government sectors to strengthen AI applications and safeguards. Clear and objective guidelines for harnessing these advances will enable broad and equitable global participation in responsible research, as well as in efforts to support the 100 Days Mission safely and securely. After all, speed without security risks undermining the very progress it aims to deliver.
CEPI and partners are leading the way in defining such approaches. A recent culmination of these efforts, developed with global experts including the University of Washington and Rosetta Commons, was a Community Statement signed by more than 250 experts from over 30 countries. It outlines common international principles to guide the responsible development of AI for protein design, including applications for vaccine design.
To turn these principles into action and guide responsible global efforts, CEPI and partners convened AI developers and biosecurity experts in January 2025 to outline concrete recommendations for funders, scientists and policymakers to minimise the risk of AI misuse in protein design. The summary report calls for deeper analysis of the potential benefits of AI for protein design, safeguards on the use of AI to design new biological materials, and mechanisms for scientists and policymakers to work together on reducing AI-related biosecurity vulnerabilities.
Alongside these efforts, CEPI has brought together AI, biosecurity and public health experts to raise awareness of possible risks and define concrete approaches to strengthening responsible AI applications for pandemic preparedness.
For example, CEPI, the Brown University Pandemic Center, and Nuclear Threat Initiative, in partnership with Foreign Policy magazine, convened senior leaders in global health security and rising biosecurity leaders from the Global South at the 2025 Munich Security Conference. The meeting was a chance to identify opportunities for reducing AI misuse while preserving its beneficial applications for the 100 Days Mission.
The meeting resulted in a bold declaration and an Op-Ed, issued by the Global South leaders who participated, calling on the international community to place biosecurity at the heart of the 100 Days Mission and accelerate responsible AI applications. Geographically representative perspectives, including from the Global South, and continuous engagement of this kind help ensure that any biosecurity guardrails are implementable, accessible and appropriate for a wide range of resource settings.
Broad and responsible accessibility is paramount. As with everything CEPI does, any AI-driven tools and capabilities we support will have equitable access at their centre. So whether a partner is based in Korea, Rwanda or the United States, its scientists will be empowered to access and use cutting-edge AI tools to design and rapidly produce medical countermeasures against future viral threats. And, crucially, to do so securely, responsibly and ethically.
The AI revolution is here, and its applications for pandemic preparedness and the 100 Days Mission are manifold. Building biosecurity principles into AI’s expansive and rapidly evolving capabilities now, rather than as an afterthought, can ensure the world continues to innovate at pace to prevent future pandemics, while also reducing biosecurity risks to human health.