Global Partnership on Artificial Intelligence Montreal Summit, 4 December 2020
Plenary opening speech: 'Why This Matters'
Baroness Joanna Shields
Chair of the Multistakeholder Experts Group Plenary and Co-Chair of the GPAI Steering Committee
The fragile nature of life and health has never been more apparent than in 2020. The novel coronavirus has tested us. It has torn at our economic and social bonds; accelerated and worsened inequalities; and increased our sense of isolation and alienation. Yet today, we are beginning to see the best of humanity shining through. Therapeutics have advanced; vaccines will be arriving soon. A return to a life that approximates normal is on the horizon.
Today, the partnership of human and artificial intelligence is empowering scientists and fuelling new discoveries, and the work of the Global Partnership on AI has never been more important.
Though technology propels us forward in so many ways, it is not a surrogate for society. We cannot abdicate our responsibilities to it. Technology needs to work for us - not the other way around.
AI is not some kind of superpower that will one day democratise benefits for all. Thirty years into this technology revolution and that still hasn’t happened. So, it's time to recognise the real economic and social forces at play and demand a future that serves us all.
We have the opportunity to learn from the mistakes we made during the first digital revolution and to put humanity at the heart of all we do. That’s what we missed last time around. I know because I had a front-row seat working for tech giants like Google, AOL and Facebook. Technology promised a more connected world with greater equality and social progress. Yet what we got was something of a free-for-all with tech giants living by the hubristic motto of ‘move fast and break things’ without any consideration of the damage being done. It was an exciting and even an intoxicating time. But no one stopped to calculate the unintended consequences. Nobody thought of the potential misuse by malign actors. No frameworks or blueprints were built to govern emerging technologies and platforms. And instead of a vibrantly connected world, we find ourselves more divided than ever. AI can either alleviate or exacerbate those divisions but the stakes are exponentially higher.
That’s why the work we are doing at the Global Partnership on AI truly matters: doing it properly, getting on the front foot, embedding the frameworks, the standards and the principles as we go, and putting humanity at the centre of our thinking.
We must ensure that the AI we are building is magnanimous, not malevolent. That bias is kept in check and that the future serves us all equally. The diversity of the data we use to power our AI and machine learning must represent society as a whole so that no one is left behind. And we must hold true to the things that make us human – empathy, intuition, emotion, compassion.
I was struck by a comment made by Christina Colclough, the newly elected labour and trade union representative on the GPAI Expert Steering Group. She said, ‘We have a right to be human’, and I have not stopped thinking about that phrase since. Christina is correct, and what’s more, we have a right for the technology we use to reflect the values of our society.
Human-augmented AI can achieve alchemy. If enabled with the right checks and balances, we can truly unleash the transformative potential of our innovations. AI may indeed boost our productivity and economic development, but no monetary value will make up for the damage if progress comes at the expense of our humanity.
So I am still that tech utopian who started my journey in Silicon Valley in the late 1980s. A little older … and a lot wiser, but still convinced that the power of technology can change the world for the better. When serving as Minister for Internet Safety and Security in the UK, I recognised the potential for positive change. When technology platforms were making it easier for extremists to radicalise and sexual predators to exploit children online, I was forced to contend with the dark side of the technology products I had played a part in creating.
Determined to understand the language of abusers, we developed natural language processing and machine learning tools to identify, track and ultimately counter these threats on the open web. We could recognise patterns in language and behaviour online that indicated nefarious activity, and in response I founded a multi-stakeholder organisation called WeProtect to fight back and eradicate online child sexual abuse.
WeProtect, like GPAI, is an example of the enormous power of coordination among governments, industry and civil society. Today, 98 countries are part of the global alliance, each one bringing its ideas, unique experiences and energies to make the internet safe for our children. By bringing together diverse opinions, skills and experiences, I am convinced GPAI too will deliver better outcomes than any one country or organisation could accomplish alone.
Which brings me to this moment. Today, unlike any time in human history, we can do good at scale, but we must be cautious as AI is a powerful force that we cannot afford to underestimate. To quote Henry Kissinger, “we must expect AI to make mistakes faster—and of greater magnitude—than humans do”. With that expectation comes a responsibility to put in place the frameworks to mitigate those risks now.
We are off to a great start. The OECD values of human rights, democracy, inclusion, diversity and sustainability are our foundation. The diversity of our membership is our strength. If GPAI is to emerge as the leading global voice on these issues, we must proactively spread awareness of our mission and galvanise support for our initiatives. And if we are to ensure that AI benefits all, not just a few, the composition of our membership must reflect that of society, and each expert’s voice must be heard.
Though GPAI is a new organisation, we should be proud of what we have achieved so far. We have built powerful synergies. In the grip of the global pandemic, we mobilised rapidly, and through focus and determination, our working groups (Responsible AI, Pandemic Response, Data Governance, the Future of Work, and Innovation and Commercialisation) all delivered phenomenal reports for this plenary. But we are only at the beginning of our journey. We must continue to build partnerships with more experts and governments, and harness the flourishing network of the global AI community to solve society’s biggest challenges, such as climate change, food insecurity, economic inequality, and poor health and education outcomes.
We have to be bold in our quest, ask the hard questions and leverage the expertise of domain specialists. For instance, why can’t we build a world in which no disease goes untreated? Well, I believe that we can. But that world will not naturally emerge; we must actively create it. Today thousands of diseases have no effective treatment, and yet we spend tens of billions each year supporting a broken drug development system that leaves millions of patients without hope or a cure. This has to change. Patients deserve better. We must leverage AI to make the best use of the world’s vast and ever-expanding universe of biomedical information for the benefit of all.
And, most importantly, translate our work into tangible action because ultimately that’s what matters.
In the lobby of BenevolentAI, where I serve as CEO, a neon sign glows with the message – “because it matters”. It is a constant reminder that the work we do is not just a job; it is a mission that inspires and unites us in a shared purpose. And it defines and guides all that we do at GPAI too.
I believe that unconventional thinking combined with purposeful technology can remake and renew the world. Let’s unite behind concrete ideas and spark innovation and positive change, together.