
AI and pandemic response

In light of the current international context, the GPAI Task Force invited the working group on responsible AI to form an ad hoc AI and pandemic response subgroup. The subgroup brings together AI practitioners, healthcare experts, members and international organizations to support the responsible development and use of AI-enabled solutions to COVID-19 and future pandemics. Through its work, it will promote the rapid, open, and secure sharing of methods, algorithms, code and validated data, in a rights- and privacy-preserving way, to inform public health responses and help save lives.

The subgroup will focus on promoting cross-sectoral and cross-border collaboration in this area, and on supporting responsible engagement with AI among the public and healthcare professionals in the global response to pandemics and public health challenges.

AI and pandemic response working group report |  Executive summary

Responsible AI in pandemic response (supporting report prepared for GPAI by The Future Society)

Our mission and objectives

We are launching a project to catalogue and analyse AI tools addressing the pandemic, issue recommendations, and suggest future projects. This project has three components:

1. Catalogue existing AI tools developed and used in the context of the COVID-19 pandemic to accelerate research, detection, prevention, response and recovery. The catalogue will list initiatives from academia, governments, the private sector, civil society, and international organizations, among others.

2. Assess selected AI tools. AI tools of particular interest will be selected from the catalogue for further assessment. The assessment will analyse how these tools implement notions of responsible research and development, and why they are beneficial applications of AI systems. The analysis will identify best practices, lessons learned, and the main socio-economic, technical, and scientific challenges to implementing responsible AI principles.

3. Issue recommendations and suggest future projects. Based on the analysis, recommend best practices to overcome the challenges identified above, and suggest specific projects to fill gaps and address problems detected during the assessment.

Our experts

Group contact point: GPAI Montreal Centre of Expertise

Subgroup members

  • Alice Hae Yun Oh, Korea Advanced Institute of Science and Technology (co-chair)
  • Paul Suetens, KU Leuven (co-chair)
  • Anurag Agrawal, Council of Scientific and Industrial Research
  • Amrutur Bharadwaj, Indian Institute of Science
  • Nozha Boujemaa, Median Technologies
  • Dirk Brockmann, Humboldt University of Berlin
  • Howie Choset, Carnegie Mellon University
  • Enrico Coiera, Macquarie University
  • Marzyeh Ghassemi, University of Toronto
  • Hiroaki Kitano, Sony Computer Science Laboratories, Inc.
  • Seán Ó hÉigeartaigh, Centre for the Study of Existential Risk
  • Michael Justin O'Sullivan, University of Auckland
  • Michael Plank, University of Canterbury
  • Mario Poljak, University of Ljubljana
  • Daniele Pucci, Istituto Italiano di Tecnologia Research Labs Genova
  • Joanna Shields, BenevolentAI
  • Margarita Sordo-Sanchez, Brigham and Women's Hospital at Harvard Medical School
  • Leong Tze Yun, National University of Singapore
  • Gaël Varoquaux, INRIA
  • Blaž Zupan, University of Ljubljana


Observers

  • Cyrus Hodes, AI Initiative
  • Alan Paic, OECD