
Working group on data governance

Our mandate as a group aligns closely with GPAI’s overall mission. Our working group aims to “collate evidence, shape research, undertake applied AI projects and provide expertise on data governance, to promote data for AI being collected, used, shared, archived and deleted in ways that are consistent with human rights, inclusion, diversity, innovation, economic growth, and societal benefit, while seeking to address the UN Sustainable Development Goals.”

Clearly, there are interactions between data governance and the remits of the other working groups – particularly on responsible AI and innovation and commercialisation – so we are working with them on areas of overlap. Our goal is that our work is also useful more widely, amongst those researching, thinking about and implementing data governance practices in AI.

  • Data governance working group report | Executive summary
  • Framework paper for GPAI's work on data governance
  • The role of data in AI (report prepared for GPAI by the Digital Curation Centre, the University of Edinburgh School of Informatics and Trilateral Research)

Our first projects

In support of the Working Group’s mandate, we undertook two projects to lay the groundwork for GPAI’s future ambitions on data governance. These were delivered at GPAI’s first plenary in December 2020.

The first is an agreed data governance framework for the group. We’re aware that there have been many frameworks proposed for thinking about data and data governance, and we’re hoping that we can simply adopt one or more of them. We want to ensure we have an agreed scope, structure and vocabulary for the Working Group’s work. To do so, this piece of work:

  • defines what the working group understands data governance to cover, particularly in the context of the work of GPAI’s other working groups
  • defines an agreed set of terms and conceptual frameworks that the Working Group will use in further reports
  • describes the aspects of data governance that are most important for further research and evidence building by the working group.

This was produced by the working group itself, with Christiane Wendehorst, President of the European Law Institute and Professor of Law at the University of Vienna, leading this work with volunteers from the group.

The second project is an investigation into the role of data in AI. The aim of this project is to situate the importance of data in AI development and to identify both areas where more data would be useful – such as specific open datasets that could be worthy of national support or international collaboration – and areas where harms arise from the collection of or access to data. In the process of doing so, it:

  • describes how data is used in the development of AI
  • describes the benefits and harms that arise from creating and having access to different types of data
  • identifies challenges to responsible AI development that arise from having or not having (access to) particular data

Openness, transparency, diversity and collaboration will be at the heart of these two projects and all of our work into the future. We will be publishing drafts and further updates on these two projects as an opportunity for consultation and wider input from the community before they are finalised.

Our experts

The working group itself consists of 30 experts from 17 countries with experience in technical, legal and institutional aspects of data governance. True to the overall ambition of GPAI, they combine cross-sectoral insights from the scientific community, industry, civil society and international organisations. We are fortunate to have a highly energetic, passionate and collaborative group bringing a wealth of perspectives beyond any single country.

Group contact point: GPAI Montreal Centre of Expertise

Group members

  • Maja Bogataj Jančič, Intellectual Property Institute, Slovenia (co-chair)
  • Jeni Tennison, Open Data Institute, UK (co-chair)
  • Jeremy Achin, DataRobot
  • Alejandro Pisanty Baruch, National Autonomous University
  • Carlo Casonato, University of Trento
  • Paul Dalby, Australian Institute for Machine Learning
  • Matija Damjan, University of Ljubljana
  • Josef Drexl, Max Planck Institute
  • Teki Akuetteh Falconer, Africa Digital Rights Hub
  • Alison Gillwald, Research ICT Africa
  • Naoto Ikegai, Tokyo University
  • Takashi Kai, Hitachi
  • V. Kamakoti, IIT Madras
  • Te Taka Keegan, University of Waikato
  • Yeong Zee Kin, Infocomm Media Development Authority
  • Shameek Kundu, Truera
  • Neil Lawrence, University of Cambridge
  • Hiroshi Mano, Data Trading Alliance
  • Kim McGrail, University of British Columbia
  • Nicolas Miailhe, The Future Society
  • Bertrand Monthubert, Occitanie Data
  • Dewey Murdick, Center for Security and Emerging Technology, Georgetown University
  • P. J. Narayanan, International Institute of Information Technology, Hyderabad
  • Seongtak Oh, National Information Society Agency of South Korea
  • Carole Piovesan, INQ Data Law
  • Iris Plöger, Federation of German Industries (BDI)
  • Oreste Pollicino, Bocconi University
  • Paola Villarreal, National Council for Science and Technology
  • Christiane Wendehorst, European Law Institute


  • Jaco Du Toit
  • Elettra Ronchi, OECD