ISSN: 2455-5282

Global Journal of Medical and Clinical Case Reports

Opinion | Open Access | Peer-Reviewed

AI and Epidemic Response: Between Intelligence and Illusion

Ioannis Tsagkarliotis*

PhD Candidate in Epidemics Logistics at UNIPI, Greece

Author and article information

*Corresponding author: Ioannis Tsagkarliotis, PhD Candidate in Epidemics Logistics at UNIPI, Greece, E-mail: [email protected]
Received: 21 November, 2025 | Accepted: 18 December, 2025 | Published: 19 December, 2025
Keywords: Artificial Intelligence; Epidemic intelligence; Public health infrastructure; Decision support systems; Pandemic preparedness; Algorithmic governance; Crisis management; Health surveillance

Cite this as

Tsagkarliotis I. AI and Epidemic Response: Between Intelligence and Illusion. Glob J Medical Clin Case Rep. 2025;12(11):231-235. Available from: https://doi.org/10.17352/gjmccr.000233

Copyright License

© 2025 Tsagkarliotis I. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

This paper examines how Artificial Intelligence (AI) evolved during the COVID-19 pandemic from a predictive tool into a potential partner in epidemic response. While AI was widely promoted as a guide for decision-making under uncertainty, real-world experience exposed a persistent gap between algorithmic capability and institutional use. Many predictive models struggled to adapt to rapidly changing human behavior and social conditions, demonstrating that epidemics are not only medical crises but also governance, coordination, and logistics challenges. This paper argues for a shift toward Responsible AI that prioritizes decision support over prediction and is embedded within public health operations. It introduces the concept of “expiration logic” to ensure that emergency AI systems remain accountable and do not outlast their legitimate use. Ultimately, AI’s value lies in strengthening human judgment and institutional resilience rather than replacing them.

Introduction

When the COVID-19 pandemic erupted, it did more than pressure health systems and economies — it distorted time, overwhelmed judgment, and forced decisions in conditions of radical uncertainty [1]. Policymakers, health professionals, and logistics operators found themselves navigating a sea of incomplete data, contradictory models, and political urgency. In this chaotic landscape, Artificial Intelligence was quickly framed as a technological compass [2]. But five years later, a more uncomfortable question remains: did AI truly help societies respond better to epidemics, or did it simply give us the illusion of control?

The contribution of this paper is to critically examine the role of AI in epidemic response not solely as a predictive tool, but as an infrastructural partner embedded in decision-making processes, integrating logistical, behavioral, and ethical dimensions of public health operations. This perspective aims to highlight both the potential and the limitations of AI, grounding insights in documented experiences from COVID-19 and other recent outbreaks [3].

The answer, inevitably, lies somewhere in between. AI did not “solve” the pandemic, nor should anyone have expected it to. Its real value was subtler. It accelerated certain types of decision-making, revealed hidden patterns in large volumes of data, and helped systems react faster than human cognition alone would allow [4]. Yet it also exposed how fragile our institutional capacity to absorb intelligence, human or artificial, really is.

From prediction obsession to decision support

Much of the early enthusiasm around AI focused on prediction: forecasting infection peaks, hospital admissions, or mortality trends [5]. Great effort went into building increasingly complex models, often competing over marginal improvements in accuracy. But epidemics are not physics experiments [6]. They are social systems shaped by behavioral shifts, political interventions, and structural inequalities. Accordingly, numerous studies have highlighted the limitations of these predictive models, which often over- or underestimated hospitalizations as human behavior and policy interventions changed rapidly [7].

In reality, what decision-makers needed was not more precise predictions, but clearer decision support under uncertainty [8]. When to introduce restrictions, how to allocate limited vaccines, where to deploy mobile health units — these are not forecasting problems alone. They are optimization and governance problems [9]. AI proved most useful when it stopped trying to outperform epidemiologists and instead began supporting them — integrating epidemiological models with hospital capacity, mobility data, and supply-chain constraints. In such hybrid systems, AI became less of an oracle and more of a partner [10].
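To make the optimization framing concrete, here is a minimal sketch of dose allocation treated as a constrained allocation problem rather than a forecast: doses are shared in proportion to risk, capped by each region's capacity, and the surplus is redistributed. All region names, risk weights, and cold-chain capacities are hypothetical, and real allocation schemes involve many more constraints and ethical criteria.

```python
def allocate_doses(supply, risk, capacity):
    """Greedily allocate `supply` doses in proportion to regional risk,
    respecting per-region capacity caps and redistributing any surplus."""
    alloc = {r: 0 for r in risk}
    remaining = supply
    active = {r for r in risk if risk[r] > 0 and capacity[r] > 0}
    while remaining > 0 and active:
        total = sum(risk[r] for r in active)
        given = 0
        for r in sorted(active):  # sorted for deterministic output
            # tentative proportional share; at least 1 dose to guarantee progress
            share = max(1, round(remaining * risk[r] / total))
            give = min(share, capacity[r] - alloc[r], remaining - given)
            alloc[r] += give
            given += give
        active = {r for r in active if alloc[r] < capacity[r]}
        if given == 0:
            break  # every region has reached its capacity cap
        remaining -= given
    return alloc

# Hypothetical example: region A has triple the risk weight but limited
# cold-chain capacity, so its surplus flows to the lower-risk regions.
print(allocate_doses(1000, {"A": 3, "B": 1, "C": 1},
                     {"A": 400, "B": 500, "C": 500}))
```

The point of the sketch is that the hard part is not predicting demand but encoding the governance constraints (caps, priorities, fairness rules) that a forecast alone cannot express.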

The gap between algorithms and action

However, a fundamental weakness remained: most AI systems never truly entered the decision room. Many of the most advanced epidemic AI models stayed confined to research papers, dashboards, and pilot platforms [11]. Meanwhile, real decisions were being made through political processes, institutional hierarchies, and crisis management protocols often disconnected from these tools. This gap is not technical — it is organizational [12].

For AI to be useful in epidemic response, it must be embedded where decisions actually happen: inside public health operations, hospital coordination centers, national logistics units, and crisis management teams [13]. Otherwise, it remains analytics without governance. At the same time, embedding AI in decision systems raises serious questions about responsibility. When an algorithm influences lockdown policies or vaccine prioritization, who is accountable? The developer? The public official? The institution that deployed it? Clarifying institutional responsibility frameworks and audit trails for AI-assisted decisions is critical to maintain accountability during crises [14].
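One concrete form such an audit trail could take is an immutable record linking each algorithmic recommendation to the human who acted on it. The sketch below is a hypothetical schema, not an established standard; the field names and the hashing choice are illustrative assumptions.

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    model_id: str        # which model version produced the recommendation
    inputs_digest: str   # hash of the input snapshot, for later reproduction
    recommendation: str  # what the algorithm suggested
    final_decision: str  # what the accountable official actually decided
    decided_by: str      # the institutionally responsible decision-maker
    timestamp: str       # UTC time the decision was taken

def record_decision(model_id, inputs, recommendation, final_decision, decided_by):
    """Create an immutable audit record tying model output to a named human decision."""
    digest = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    return DecisionRecord(
        model_id=model_id,
        inputs_digest=digest,
        recommendation=recommendation,
        final_decision=final_decision,
        decided_by=decided_by,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```

Separating `recommendation` from `final_decision` makes human overrides visible after the fact, which is precisely what post-crisis accountability reviews need.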

Ethics is not an afterthought in a crisis

Another illusion exposed during COVID-19 was the idea that ethics can be temporarily suspended in the name of emergency [15]. Contact tracing apps, movement tracking, risk scoring, biometric surveillance — all were justified as necessary tools for public health. In some cases, they were. In others, they quietly reshaped the relationship between citizens and the state [16].

AI systems, when embedded in epidemic governance, carry political power. They determine who is considered “high risk,” which neighborhoods face restrictions, and which populations receive priority access to care. If these systems are built on biased data or opaque design choices, they risk amplifying social inequalities under the cover of technological neutrality [17].

Responsible AI in epidemic response must therefore be designed with an expiration logic — not only technically, but institutionally [18]. Emergency technologies should come with built-in mechanisms for dismantling or repurposing once the crisis subsides. Practical measures include sunset clauses, mandatory post-crisis audits, and explicit policies for data deletion or system deactivation [19].
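As an illustration of what expiration logic could look like in software, the following sketch wraps an emergency model so that it refuses to run past its mandated sunset date unless explicitly renewed by an attributable institutional act. The class, dates, and renewal mechanism are hypothetical assumptions, not an existing framework.

```python
from datetime import date

class SunsetExpired(RuntimeError):
    """Raised when an emergency system is invoked past its legal mandate."""

class EmergencyModel:
    def __init__(self, sunset):
        self.sunset = sunset   # end date of the legal mandate
        self.renewals = []     # audit trail of explicit extensions

    def renew(self, new_sunset, authorized_by):
        # An extension must be an explicit, attributable institutional act.
        self.renewals.append((self.sunset, new_sunset, authorized_by))
        self.sunset = new_sunset

    def predict(self, features, today=None):
        today = today or date.today()
        if today > self.sunset:
            raise SunsetExpired(
                f"mandate expired on {self.sunset}; decommission or renew")
        # ... placeholder for the actual inference step ...
        return {"risk": "placeholder"}
```

The technical check is trivial; the substantive point is that expiry is the default and continuation requires a named authorizer, inverting the usual inertia of emergency systems.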

Epidemics as logistics and social systems

One of the greatest conceptual failures of the pandemic response was treating epidemics as purely medical events. They are not. They are logistics nightmares, behavioral experiments, and governance stress tests [9].

AI’s future role in epidemics must reflect this reality. It should not be restricted to case prediction or hospital forecasting [1]. Instead, it must extend into:

  • Medical supply chain optimization
  • Vaccine and resource distribution under uncertainty
  • Modeling of population behavior and compliance
  • Multi-sectoral decision simulations across health, transport, and economy

Only then can AI contribute to system resilience rather than isolated technological fixes [20]. This systems-level approach has been documented in studies where AI models integrated health, mobility, and supply data to improve epidemic mitigation outcomes [21].
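The behavioral item above can be illustrated with a deliberately simple discrete-time SIR model in which an aggregate compliance level scales the transmission rate. All parameter values are illustrative assumptions, not empirical estimates, and real behavioral models are far richer than a single scalar.

```python
def sir_with_compliance(beta, gamma, compliance, days, s0=0.99, i0=0.01):
    """Return the final attack rate (fraction ever infected) of a
    discrete-time SIR epidemic where compliance in [0, 1] reduces
    effective transmission: beta_eff = beta * (1 - compliance)."""
    s, i, r = s0, i0, 0.0
    beta_eff = beta * (1.0 - compliance)
    for _ in range(days):
        new_inf = beta_eff * s * i   # new infections this step
        new_rec = gamma * i          # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return r + i  # everyone infected so far

# Hypothetical comparison: halving effective transmission via compliance
low = sir_with_compliance(0.3, 0.1, compliance=0.0, days=300)
high = sir_with_compliance(0.3, 0.1, compliance=0.5, days=300)
print(f"attack rate: no compliance {low:.2f}, 50% compliance {high:.2f}")
```

Even this toy model shows why forecasts decoupled from behavior fail: the attack rate is highly sensitive to a quantity (compliance) that policy itself changes mid-epidemic.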

From hype to infrastructure

AI for epidemic response should not be evaluated by its algorithmic sophistication, but by its ability to function as reliable infrastructure under pressure [22]. This requires a shift in thinking: not AI as a futuristic add-on, but AI as a public health utility — like water, electricity, or communication networks. Something that works in the background, connects systems, supports human decision-makers, and remains accountable to society [23].

Because in the next epidemic, the real challenge will not be access to data or computing power. It will be whether our institutions have learned how to think alongside intelligent machines — without surrendering judgment, responsibility, or trust. Ultimately, integrating AI as an infrastructural partner rather than a predictive oracle may transform epidemic response from reactive crisis management to proactive resilience planning.

Conclusion

The experience of COVID-19 demonstrates that the central challenge of Artificial Intelligence in epidemic response is not computational capability, but institutional integration. Treating AI as an “oracle” capable of predicting epidemic trajectories created an illusion of control that often failed to translate into effective action. Even highly sophisticated models proved limited when confined to research environments and disconnected from the political, organizational, and logistical contexts in which real decisions are made.

This paper argues for a reorientation of AI toward decision support infrastructure—systems designed to operate within public health institutions and to coordinate medical, logistical, behavioral, and economic dimensions of epidemic response. Such an approach recognizes epidemics as socio-technical systems rather than purely medical events. However, embedding AI more deeply into governance structures also amplifies its ethical and political significance. Responsible AI in epidemic response therefore requires explicit accountability mechanisms, transparency, and institutional “expiration logic” to prevent the normalization of emergency surveillance and exceptional measures.

Ultimately, the effectiveness of AI in future outbreaks will depend less on predictive accuracy or computing power than on the capacity of institutions to collaborate with intelligent systems while preserving human judgment, democratic oversight, and public trust. Only under these conditions can AI move from illusion to meaningful intelligence in epidemic governance.

References

  1. McKee M, Rosenbacke R, Stuckler D. The power of artificial intelligence for managing pandemics: a primer for public health professionals. Int J Health Plann Manage. 2025;40(1):257–266. Available from: https://doi.org/10.1002/hpm.3864
  2. Qin H, Kong JD, Ding W, Ahluwalia R, Morr CE, Engin Z, et al. Towards trustworthy artificial intelligence for equitable global health. arXiv [Preprint]. 2023. Available from: https://arxiv.org/abs/2309.05088
  3. Bertsimas D. Optimizing vaccine allocation to combat COVID-19. medRxiv [Preprint]. 2020. Available from: https://www.medrxiv.org/content/10.1101/2020.11.17.20233213v1
  4. Kumar M. Leveraging AI to predict and manage infectious disease outbreaks: insights gained from COVID-19. J Artif Intell Cloud Comput. 2023;1. Available from: https://doi.org/10.47363/JAICC/2023(2)E234
  5. Doyen S, Dadario NB. Twelve plagues of AI in healthcare: a practical guide to current issues with using machine learning in a medical context. Front Digit Health. 2022;4:765406. Available from: https://doi.org/10.3389/fdgth.2022.765406
  6. Kraemer MUG, Tsui JL-H, Chang S, Lytras S, Khurana MP, Vanderslott S, et al. Artificial intelligence for modelling infectious disease epidemics. Nature. 2025;638(8051):623. Available from: https://doi.org/10.1038/s41586-024-08564-w
  7. van Heusden K, Stewart G, Otto SP, Dumont GA. Effective pandemic policy design through feedback does not need accurate predictions. PLOS Glob Public Health. 2023;3(2):e0000955. Available from: https://doi.org/10.1371/journal.pgph.0000955
  8. Few S, Stanton MCB, Roelich K. Decision making for transformative change: exploring model use, structural uncertainty and deep leverage points for change in decision making under deep uncertainty. Front Clim. 2023;5:1129378. Available from: https://eprints.whiterose.ac.uk/id/eprint/206729/1/fclim-05-1129378.pdf
  9. Hempel S, Burke RV, Hochman M, Thompson G, Brothers A, Shin JJ, et al. Resource allocation and pandemic response: an evidence synthesis to inform decision making. Rockville (MD): Agency for Healthcare Research and Quality; 2020. Available from: https://www.ncbi.nlm.nih.gov/books/NBK562921/
  10. Arık SÖ, Shor J, Sinha R, Yoon J, Ledsam JR, Le LT, et al. A prospective evaluation of AI-augmented epidemiology to forecast COVID-19 in the USA and Japan. NPJ Digit Med. 2021;4(1):146. Available from: https://doi.org/10.1038/s41746-021-00511-7
  11. MacIntyre CR, Chen X, Kunasekaran M, Quigley A, Lim S, Stone H, et al. Artificial intelligence in public health: the potential of epidemic early warning systems. J Int Med Res. 2023;51(3):03000605231159335. Available from: https://doi.org/10.1177/03000605231159335
  12. Sokol K, Fackler JC, Vogt JE. Artificial intelligence should genuinely support clinical reasoning and decision making to bridge the translational gap. NPJ Digit Med. 2025;8(1):345. Available from: https://doi.org/10.1038/s41746-025-01725-9
  13. Hattab G, Irrgang C, Körber N, Kühnert D, Ladewig K. The way forward to embrace artificial intelligence in public health. Am J Public Health. 2025;115(2):123. Available from: https://doi.org/10.2105/ajph.2024.307888
  14. Yuan Q, Chen T. Holding AI-based systems accountable in the public sector: a systematic review. Public Perform Manag Rev. 2025;48(6):1389–1415. Available from: https://discovery.fiu.edu/display/pub537777
  15. World Health Organization. Strategies to end seclusion and restraint. Geneva: World Health Organization; 2019. Available from: https://www.who.int/publications/i/item/9789241516754
  16. Matthan R. The privacy implications of using data technologies in a pandemic. J Indian Inst Sci. 2020;100(4):611–620. Available from: https://doi.org/10.1007/s41745-020-00198-x
  17. Tipton K, Leas BF, Flores E, Jepson C, Aysola J, Cohen JB, et al. Impact of healthcare algorithms on racial and ethnic disparities in health and healthcare. Rockville (MD): Agency for Healthcare Research and Quality; 2023. Available from: https://doi.org/10.23970/ahrqepccer268
  18. Lee CC, Comes T, Finn M, Mostafavi A. Roadmap towards responsible AI in crisis resilience management. arXiv [Preprint]. 2022. Available from: https://arxiv.org/abs/2207.09648
  19. Boudreaux B, DeNardo MA, Denton SW, Sánchez R, Feistel K, Dayalani H. Data privacy during pandemics: a scorecard approach for evaluating the privacy implications of COVID-19 mobile phone surveillance programs. Santa Monica (CA): RAND Corporation; 2020. Available from: https://www.rand.org/content/dam/rand/pubs/research_reports/RRA300/RRA365-1/RAND_RRA365-1.pdf
  20. Smyth C, Dennehy D, Wamba SF, Scott M, Harfouche A. Artificial intelligence and prescriptive analytics for supply chain resilience: a systematic literature review and research agenda. Int J Prod Res. 2024;62(23):8537–8556. Available from: https://ideas.repec.org/a/taf/tprsxx/v62y2024i23p8537-8561.html
  21. Ye X, Yiğitcanlar T, Goodchild M, Huang X, Li W, Shaw S, et al. Artificial intelligence in urban science: why does it matter? Ann GIS. 2025;31(2):181–195. Available from: https://doi.org/10.1080/19475683.2025.2469110
  22. Syrowatka A, Kuznetsova M, Alsubai A, Beckman AL, Bain P, Craig KJT, et al. Leveraging artificial intelligence for pandemic preparedness and response: a scoping review to identify key use cases. NPJ Digit Med. 2021;4(1):96. Available from: https://doi.org/10.1038/s41746-021-00459-8
  23. Morley J, Hine E, Roberts H, Sirbu R, Ashrafian H, Blease C, et al. Global health in the age of AI: charting a course for ethical implementation and societal benefit. SSRN Electron J. 2025. Available from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5217060
 
