There are many ways in which Artificial Intelligence can potentially benefit society – but doing so requires a radically different data-sharing model. Public and political interest in AI technologies has increased dramatically over the last couple of years. Once dismissed as territory for nerds and computer geeks, AI now makes headlines more often than almost any other topic.
Most of these headlines paint a rather bleak picture, making exaggerated claims about the loss of jobs in the wake of AI or predicting a digital Cold War. One of the most-watched TED talks on the topic is titled “Artificial Intelligence: It will kill us.”
Despite the doomsday scenarios propagated by mainstream media, there are many ways in which AI can potentially benefit society. In fact, a recent McKinsey report suggests that AI can make a significant contribution to all of the UN Sustainable Development Goals. From combating climate change to feeding an ever-expanding global population, it can play a key role in solving some of the world’s most pressing challenges.
In order to do so, however, we still need to overcome several barriers. According to the same report, the most significant of these barriers is data. This seems surprising at first. Aren’t we producing more data than ever before? We are indeed, but this data does not necessarily end up in the right places.
To understand why this is such a problem, let us take a closer look at what AI actually is. Artificial Intelligence is generally defined as “the theory and development of computer systems able to perform tasks normally requiring human intelligence.”
The key enabler behind this is Machine Learning. As the name suggests, this refers to machines capable of “learning” something independently. Instead of being explicitly programmed, they rely on recognizing patterns in the data they are given. As a rule, the more data the machine receives, the more accurate its predictions become.
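This learning-from-data principle can be sketched in a few lines of Python. The example below is purely illustrative (the task, the noise level, and all function names are assumptions, not anything from a real AI system): a toy model estimates the slope of a noisy linear relationship from samples, and its average estimation error shrinks as the training set grows.

```python
import random

def fit_slope(n, seed):
    """Estimate w in y = 2x + noise from n noisy samples via least squares."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    ys = [2.0 * x + rng.gauss(0.0, 0.5) for x in xs]
    # Closed-form least squares through the origin: w = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def mean_error(n, trials=200):
    """Average absolute estimation error across independent runs."""
    return sum(abs(fit_slope(n, seed) - 2.0) for seed in range(trials)) / trials

# The average error shrinks as the amount of training data grows.
print(f"  10 samples: mean error {mean_error(10):.3f}")
print(f"1000 samples: mean error {mean_error(1000):.3f}")
```

Running the sketch shows the error with 1,000 samples is an order of magnitude smaller than with 10 – the same dynamic, at toy scale, that makes access to large, shared data sets so valuable for real machine-learning systems.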
Data is hence a key component of machine learning, and by extension of AI. It is its very foundation, the “raw material” if you will. At the moment, most of this raw material is either in the hands of public institutions or private companies – neither of whom are incentivized to share it. Whereas the public sector is concerned with privacy issues, the private sector is concerned with profit motives.
Technological revolutions are rarely driven by one big discovery alone. Instead, scientific breakthroughs are usually just the first step before entrepreneurs everywhere apply them to real-world problems and customer needs. As we move from an era of AI discovery to an era of AI implementation, one thing becomes obvious: when it comes to data, we need to favor cooperation over isolation.
There are several reasons for this:
1. Sharing data accelerates advances in AI. The more data the algorithm can train with, the faster it learns – and the more accurate it becomes.
2. Sharing data promotes innovation. The more we allow people outside the silos of elite academic institutions to work with AI, the more innovative solutions to real-world problems we will see.
3. Sharing data helps to decrease algorithmic bias. By now, most of us have heard shocking examples of AI reinforcing negative stereotypes. Sharing data globally means exposure to more diverse data sets – and if done right, less bias.
4. Sharing data beyond institutional or national boundaries forces us to see AI for what it is – a global phenomenon that cannot and should not be contained within artificial boundaries.
We currently stand at a historic crossroads in deciding how to use and govern a technology that will – one way or another – drastically alter the way our social, political and economic systems function. Yet one thing is certain: a zero-sum mentality is not the answer.
Leveraging the full potential of AI for social good instead requires a radically different data-sharing model. Whether that means promoting public-private partnerships, establishing a global regulatory body or even declaring data a public good remains to be seen.
All of these ideas come with distinct challenges of their own. But they can serve as thought starters for what a new social contract between citizens, companies, and countries could look like in the age of AI.
Much of the current discourse is focused on the misuse of data. Yet we should also acknowledge the opportunity costs of missed use. Given the unprecedented global challenges ahead, we need to ask ourselves whether we can afford the latter in the name of national politics.