Using AI algorithms to address multi-stakeholder public good problems (e.g. pandemic response, climate action) requires sourcing data from, and sharing it across, communities, organisations and nations in a way that is secure, respects privacy, sovereignty, access control and intellectual property rights, and prevents misuse. Privacy enhancing technologies such as differential privacy and federated learning, and more generally the emergent concept of “structured transparency”, can help enable this. There have been recent efforts to bring together a view of the current state of the art in these technologies and the ways in which they can potentially support real-life use cases. However, actual deployment and adoption of such technologies remains relatively limited and poorly understood by key stakeholders, and where the technologies are used, they tend to be applied in an ad hoc, loosely integrated fashion.
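To make the idea of a privacy enhancing technology concrete, the sketch below illustrates differential privacy in its simplest form: releasing a count over shared data with calibrated Laplace noise, so that no single individual's record materially changes the published result. This is a minimal, generic illustration only; the function names are hypothetical and it does not reflect any specific implementation used in the demonstration project.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    """Release a count satisfying epsilon-differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon
    suffices. Smaller epsilon means stronger privacy and noisier output.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a noisy count of even values in a shared dataset.
records = list(range(100))
noisy = dp_count(records, lambda v: v % 2 == 0, epsilon=1.0)
```

In a federated setting, each participating organisation would apply such a mechanism locally before sharing aggregates, which is one way structured transparency can be realised in practice.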
The project partnered with Singapore’s Infocomm Media Development Authority (IMDA) to conduct a demonstration of the use case, with Singapore’s Digital Trust Centre (DTC) acting as the delivery partner. With the demonstration project now complete, the IMDA/DTC team has documented the key lessons learnt from the project.