Global Journal of Medical and Clinical Case Reports
PhD Candidate in Epidemics Logistics at UNIPI, Greece
Cite this as
Tsagkarliotis I. AI and Epidemic Response: Between Intelligence and Illusion. Glob J Medical Clin Case Rep. 2025;12(11):231-235. DOI: 10.17352/gjmccr.000233
Copyright License
© 2025 Tsagkarliotis I. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

This paper examines how Artificial Intelligence (AI) evolved during the COVID-19 pandemic from a predictive tool into a potential partner in epidemic response. While AI was widely promoted as a guide for decision-making under uncertainty, real-world experience exposed a persistent gap between algorithmic capability and institutional use. Many predictive models struggled to adapt to rapidly changing human behavior and social conditions, demonstrating that epidemics are not only medical crises but also governance, coordination, and logistics challenges. This paper argues for a shift toward Responsible AI that prioritizes decision support over prediction and is embedded within public health operations. It introduces the concept of “expiration logic” to ensure that emergency AI systems remain accountable and do not outlast their legitimate use. Ultimately, AI’s value lies in strengthening human judgment and institutional resilience rather than replacing them.
When the COVID-19 pandemic erupted, it did more than pressure health systems and economies — it distorted time, overwhelmed judgment, and forced decisions in conditions of radical uncertainty [1]. Policymakers, health professionals, and logistics operators found themselves navigating a sea of incomplete data, contradictory models, and political urgency. In this chaotic landscape, Artificial Intelligence was quickly framed as a technological compass [2]. But five years later, a more uncomfortable question remains: did AI truly help societies respond better to epidemics, or did it simply give us the illusion of control?
The contribution of this paper is to critically examine the role of AI in epidemic response not solely as a predictive tool, but as an infrastructural partner embedded in decision-making processes, integrating logistical, behavioral, and ethical dimensions of public health operations. This perspective aims to highlight both the potential and the limitations of AI, grounding insights in documented experiences from COVID-19 and other recent outbreaks [3].
The answer, inevitably, lies somewhere in between. AI did not “solve” the pandemic, nor should anyone have expected it to. Its real value was subtler. It accelerated certain types of decision-making, revealed hidden patterns in large volumes of data, and helped systems react faster than human cognition alone would allow [4]. Yet, it also exposed how fragile our institutional capacity is to absorb intelligence — human or artificial.
Much of the early enthusiasm around AI focused on prediction: forecasting infection peaks, hospital admissions, or mortality trends [5]. Great effort went into building increasingly complex models, often competing over marginal improvements in accuracy. But epidemics are not physics experiments [6]. They are social systems filled with behavioral shifts, political interventions, and structural inequalities. Numerous studies have highlighted the limitations of these predictive models, which often overestimated or underestimated hospitalizations because human behavior and policy interventions changed faster than the models could adapt [7].
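To make this failure mode concrete, consider the minimal sketch below (textbook SIR dynamics with invented parameters, not any specific published model): a forecast calibrated to an early contact rate keeps projecting a large peak even after voluntary distancing halves transmission mid-epidemic.

```python
# Minimal SIR simulation (hypothetical parameters) showing how a
# fixed-parameter forecast diverges once behavior changes.

def simulate_sir(beta_at, gamma=0.1, s0=0.99, i0=0.01, days=150):
    """Discrete-time SIR; beta_at maps day -> transmission rate."""
    s, i, prevalence = s0, i0, []
    for t in range(days):
        new_infections = beta_at(t) * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        prevalence.append(i)
    return prevalence

# "Forecast": assumes the early contact rate holds for the whole wave.
forecast = simulate_sir(lambda t: 0.3)
# "Reality": voluntary distancing halves transmission on day 40.
reality = simulate_sir(lambda t: 0.3 if t < 40 else 0.15)

print(f"forecast peak prevalence: {max(forecast):.3f}")
print(f"actual peak prevalence:   {max(reality):.3f}")
# The fixed-rate forecast overshoots the real peak, the same failure
# mode reported for COVID-19 hospitalization models [7].
```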
In reality, what decision-makers needed was not more precise predictions, but clearer decision support under uncertainty [8]. When to introduce restrictions, how to allocate limited vaccines, where to deploy mobile health units — these are not forecasting problems alone. They are optimization and governance problems [9]. AI proved most useful when it stopped trying to outperform epidemiologists and instead began supporting them — integrating epidemiological models with hospital capacity, mobility data, and supply-chain constraints. In such hybrid systems, AI became less of an oracle and more of a partner [10].
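As a hedged illustration of this distinction, the sketch below frames vaccine allocation as a constrained optimization rather than a forecasting task. The regions, risk scores, and diminishing-returns benefit function are all invented; a real decision-support system would draw such inputs from epidemiological models, hospital capacity, and supply-chain data.

```python
import heapq

# Hypothetical decision-support sketch: split a limited vaccine stock
# across regions to maximize expected infections averted.

regions = {                 # region -> (capacity in units, risk weight)
    "North": (50, 0.8),
    "Coast": (30, 1.2),
    "Metro": (80, 1.5),
}
STOCK = 100                 # total units of vaccine available

def marginal_benefit(risk, units_already_given):
    # Diminishing returns: each additional unit in a region averts less.
    return risk / (1 + units_already_given)

allocation = {name: 0 for name in regions}
# Max-heap (negated benefit) of the next unit's value in each region.
frontier = [(-marginal_benefit(risk, 0), name)
            for name, (cap, risk) in regions.items()]
heapq.heapify(frontier)

for _ in range(STOCK):
    neg_benefit, name = heapq.heappop(frontier)
    allocation[name] += 1
    cap, risk = regions[name]
    if allocation[name] < cap:   # respect each region's capacity
        heapq.heappush(
            frontier, (-marginal_benefit(risk, allocation[name]), name))

print(allocation)
```

The design point is that policy judgment enters through the objective and the constraints, not through a more accurate forecast.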
However, a fundamental weakness remained: most AI systems never truly entered the decision room. Many of the most advanced epidemic AI models stayed confined to research papers, dashboards, and pilot platforms [11]. Meanwhile, real decisions were being made through political processes, institutional hierarchies, and crisis management protocols often disconnected from these tools. This gap is not technical — it is organizational [12].
For AI to be useful in epidemic response, it must be embedded where decisions actually happen: inside public health operations, hospital coordination centers, national logistics units, and crisis management teams [13]. Otherwise, it remains analytics without governance. At the same time, embedding AI in decision systems raises serious questions about responsibility. When an algorithm influences lockdown policies or vaccine prioritization, who is accountable? The developer? The public official? The institution that deployed it? Clarifying institutional responsibility frameworks and audit trails for AI-assisted decisions is critical to maintain accountability during crises [14].
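One possible shape for such an audit trail is sketched below; the field names are illustrative rather than any established standard, but the principle is that every AI-assisted recommendation is logged together with the model version, its inputs, the human decision that followed, and the accountable role.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Illustrative audit-trail record for an AI-assisted public health
# decision. The recommendation, the evidence behind it, and the
# accountable human sign-off are all logged together.

@dataclass
class DecisionAuditRecord:
    model_id: str        # which model/version produced the advice
    inputs_digest: str   # hash or reference to the input data used
    recommendation: str  # what the system suggested
    human_decision: str  # what the official actually decided
    decided_by: str      # the accountable role, not the algorithm
    rationale: str       # why the human followed or overrode the advice
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

log = [DecisionAuditRecord(
    model_id="triage-forecast-v2.3",
    inputs_digest="sha256:ab12...",
    recommendation="prioritize region Metro for next vaccine shipment",
    human_decision="split shipment between Metro and Coast",
    decided_by="regional health director",
    rationale="Coast cold-chain capacity expires this week",
)]

print(json.dumps([asdict(r) for r in log], indent=2))
```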
Another illusion exposed during COVID-19 was the idea that ethics can be temporarily suspended in the name of emergency [15]. Contact tracing apps, movement tracking, risk scoring, biometric surveillance — all were justified as necessary tools for public health. In some cases, they were. In others, they quietly reshaped the relationship between citizens and the state [16].
AI systems, when embedded in epidemic governance, carry political power. They determine who is considered “high risk,” which neighborhoods face restrictions, and which populations receive priority access to care. If these systems are built on biased data or opaque design choices, they risk amplifying social inequalities under the cover of technological neutrality [17].
Responsible AI in epidemic response must therefore be designed with an expiration logic — not only technically, but institutionally. Emergency technologies should come with built-in mechanisms for dismantling or repurposing once the crisis subsides [18]. Practical measures include sunset clauses, mandatory post-crisis audits, and explicit policies for data deletion or system deactivation [19].
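A minimal sketch of what expiration logic could mean in code is given below; the system, sunset date, and behavior are assumptions for illustration. The emergency tool carries its mandate's expiry, refuses to operate beyond it, and deletes its data instead of waiting for an administrative decision.

```python
from datetime import date

# Hypothetical sketch of "expiration logic": an emergency system that
# enforces its own sunset clause instead of relying on later goodwill.

class EmergencyContactTracer:
    def __init__(self, sunset: date):
        self.sunset = sunset           # legally mandated expiry date
        self.records: list[dict] = []  # exposure records, held only until sunset

    def _check_sunset(self):
        if date.today() >= self.sunset:
            self.purge()
            raise RuntimeError(
                f"Emergency mandate expired on {self.sunset}; "
                "system disabled pending re-authorization and audit.")

    def record_exposure(self, case_id: str, contact_id: str):
        self._check_sunset()           # every operation re-checks the mandate
        self.records.append({"case": case_id, "contact": contact_id})

    def purge(self):
        # Explicit data-deletion policy: nothing outlives the crisis mandate.
        self.records.clear()

tracer = EmergencyContactTracer(sunset=date(2021, 6, 30))
try:
    tracer.record_exposure("case-017", "contact-442")
except RuntimeError as e:
    print(e)  # past the sunset date, the system refuses to run
```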
One of the greatest conceptual failures of the pandemic response was treating epidemics as purely medical events. They are not. They are logistics nightmares, behavioral experiments, and governance stress tests [9].
AI’s future role in epidemics must reflect this reality. It should not be restricted to case prediction or hospital forecasting [1]. Instead, it must extend into the logistical, behavioral, and governance dimensions of response: allocating scarce resources, coordinating supply chains, anticipating behavioral shifts, and supporting crisis management.
Only then can AI contribute to system resilience rather than isolated technological fixes [20]. This systems-level approach has been documented in studies where AI models integrated health, mobility, and supply data to improve epidemic mitigation outcomes [21].
AI for epidemic response should not be evaluated by its algorithmic sophistication, but by its ability to function as reliable infrastructure under pressure [22]. This requires a shift in thinking: not AI as a futuristic add-on, but AI as a public health utility — like water, electricity, or communication networks. Something that works in the background, connects systems, supports human decision-makers, and remains accountable to society [23].
Because in the next epidemic, the real challenge will not be access to data or computing power. It will be whether our institutions have learned how to think alongside intelligent machines — without surrendering judgment, responsibility, or trust. Ultimately, integrating AI as an infrastructural partner rather than a predictive oracle may transform epidemic response from reactive crisis management to proactive resilience planning.
The experience of COVID-19 demonstrates that the central challenge of Artificial Intelligence in epidemic response is not computational capability, but institutional integration. Treating AI as an “oracle” capable of predicting epidemic trajectories created an illusion of control that often failed to translate into effective action. Even highly sophisticated models proved limited when confined to research environments and disconnected from the political, organizational, and logistical contexts in which real decisions are made.
This paper argues for a reorientation of AI toward decision support infrastructure—systems designed to operate within public health institutions and to coordinate medical, logistical, behavioral, and economic dimensions of epidemic response. Such an approach recognizes epidemics as socio-technical systems rather than purely medical events. However, embedding AI more deeply into governance structures also amplifies its ethical and political significance. Responsible AI in epidemic response therefore requires explicit accountability mechanisms, transparency, and institutional “expiration logic” to prevent the normalization of emergency surveillance and exceptional measures.
Ultimately, the effectiveness of AI in future outbreaks will depend less on predictive accuracy or computing power than on the capacity of institutions to collaborate with intelligent systems while preserving human judgment, democratic oversight, and public trust. Only under these conditions can AI move from illusion to meaningful intelligence in epidemic governance.
