

Working group on responsible AI

The mission of our working group is simple: we strive to foster and contribute to the responsible development, use and governance of human-centred AI systems, in congruence with the UN Sustainable Development Goals.

GPAI’s working groups do not operate in silos, so we may collaborate with other GPAI groups from time to time. For instance, we may interface with the data governance working group when our respective projects share common dimensions. In light of the COVID-19 pandemic, we have also formed an ad hoc sub-group on AI and pandemic response to support the responsible development, use and governance of AI in this specific area.

Responsible AI working group report | Executive summary

Areas for future action in the responsible AI ecosystem (supporting report prepared for GPAI by the Future Society)

Our first project

In support of the Working Group’s mandate, we are launching a project that will lay the groundwork for GPAI’s future ambitions on responsible AI. The results of this first project were delivered at GPAI’s first plenary, held in December 2020.

This is a first step toward developing or integrating coordination mechanisms for the international community, to facilitate cross-sectoral and international collaboration on AI for social good and, in particular, to contribute to achieving the UN Sustainable Development Goals. To do so, the initial project:

  • Catalogues existing key initiatives undertaken by various stakeholders to promote the responsible research and development of beneficial AI systems and applications, including: 
    • projects and frameworks to operationalize AI ethical principles and the application of AI for social good;
    • mechanisms and processes to identify and mitigate bias, discrimination, and inequities in AI systems;
    • tools, certifications, assessments, and audit mechanisms to evaluate AI systems for responsibility and trustworthiness, based on metrics such as safety, robustness, accountability, transparency, fairness, respect for human rights, and the promotion of equity (one such metric is sketched in the example after this list).

  • Analyses promising initiatives with strong potential to contribute to the development and use of beneficial AI systems and applications, and that would benefit from international and cross-sectoral collaboration.

  • Recommends new initiatives and how they could, in practice, be implemented to help promote the responsible development, use and governance of human-centred AI systems.
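
For illustration only, here is a minimal sketch in Python of one metric such an evaluation tool might compute: the demographic parity difference, a simple fairness measure comparing positive-outcome rates between two groups. The function name and toy data are invented for this example; they do not come from any GPAI deliverable.

```python
# Hypothetical illustration: demographic parity difference, i.e. the absolute
# gap in positive-prediction rates between two groups. A larger gap can signal
# potential bias worth investigating; what counts as acceptable is context-
# and policy-dependent.

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Absolute gap in positive-outcome rates between group_a and group_b."""
    def positive_rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / len(members) if members else 0.0
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Toy data: binary predictions (1 = favourable outcome) for two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(preds, grps, "A", "B"))  # 0.5
```

In practice, audit tools combine many such metrics (accuracy, robustness, transparency indicators, and so on) and weigh them against the deployment context; no single number certifies an AI system as responsible or trustworthy.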

Medium-term deliverable

By 2022-2023, the working group will strive to foster the development or integration of coordination mechanisms for the international community in the area of AI for social good applications, facilitating multistakeholder and international collaboration. Such coordination mechanisms could include public consultation, when appropriate. The overall objective will be to bring together needs, expertise and funding sources. This deliverable may be associated with other potential future projects of the Working Group, for example on the following topics:

  • Using AI systems to advance the UN Sustainable Development Goals, build public trust, increase citizen engagement, improve government service delivery, contribute to the promotion of human rights, and strengthen democratic processes, institutions, and outcomes;
     
  • Assessing and developing practical multistakeholder frameworks for specific applications for responsible AI;
     
  • Developing tools, certifications, assessments, and audit mechanisms that could be used to evaluate AI systems for responsibility and trustworthiness, based on metrics such as accountability, transparency, safety, robustness, fairness, respect for human rights, and the promotion of equity.

Our experts

The Working Group on Responsible AI consists of 32 experts from over 15 countries, including India, Slovenia and Mexico. We have a wide range of expertise, including philosophy, computer science, policy and ethics, resulting in varied viewpoints and robust discussions. As the group refined its mission and deliverables, we witnessed impressive cross-disciplinary collaboration among these experts.

Group contact point: GPAI Montreal Centre of Expertise

Group members

  • Yoshua Bengio, Mila, Quebec Artificial Intelligence Institute (co-chair)
  • Raja Chatila, Sorbonne University (co-chair)
  • Carolina Aguerre, Center for Technology and Society (CETyS)
  • Genevieve Bell, Australian National University
  • Ivan Bratko, University of Ljubljana
  • Joanna Bryson, Hertie School
  • Partha Pratim Chakrabarti, Indian Institute of Technology Kharagpur
  • Jack Clark, OpenAI
  • Virginia Dignum, Umeå University
  • Dyan Gibbens, Trumbull Unmanned
  • Kate Hannah, Te Pūnaha Matatini, University of Auckland
  • Toshiya Jitsuzumi, Chuo University
  • Alistair Knott, University of Otago
  • Pushmeet Kohli, DeepMind
  • Marta Kwiatkowska, Oxford University
  • Christian Lemaître Léon, Metropolitan Autonomous University
  • Vincent C. Müller, Technical University of Eindhoven
  • Wanda Muñoz, SEHLAC Mexico
  • Alice H. Oh, Korea Advanced Institute of Science and Technology School of Computing
  • Luka Omladič, University of Ljubljana
  • Julie Owono, Internet Sans Frontières
  • Dino Pedreschi, University of Pisa
  • V K Rajah, Advisory Council on the Ethical Use of Artificial Intelligence and Data
  • Catherine Régis, University of Montréal
  • Francesca Rossi, IBM Research
  • David Sadek, Thales Group
  • Rajeev Sangal, International Institute of Information Technology Hyderabad
  • Matthias Spielkamp, Algorithm Watch
  • Osamu Sudo, Chuo University
  • Roger Taylor, Centre for Data Ethics and Innovation 


Observers

  • Amir Banifatemi, AI Commons
  • Vilas Dhar, The Patrick J. McGovern Foundation
  • Marc-Antoine Dilhac, ALGORA Lab
  • Adam Murray, OECD Network of Experts on AI
  • Karine Perset, OECD
  • Stuart Russell, UC Berkeley
  • Cédric Wachholz, Digital Innovation and Transformation Section, Communication and Information Sector, UNESCO