The idea of the tragedy of the commons is well established; I think the term was coined in the late 1960s, but the idea is much older. Simply put, it’s the concept that if people are given free access to a resource, humans being humans, the resource will be overused and eventually destroyed entirely. The ‘commons’ in the phrase refers, I believe, to shared grazing grounds, and the central example in the metaphor is a common pasture: people will graze it first, before making use of their own private land, and so it becomes overgrazed.
Crucially, individual self-restraint doesn’t really resolve the issue: even if some people show a bit of care and don’t overuse the common ground, their restraint simply leaves room for others – often less scrupulous people – to fill the space themselves. In many ways, the metaphor is about greed – perhaps even a structural form of greed. In some ways, it has become an argument for the ownership of property or, in a less extreme form, for its stewardship by government (which, in effect, often differs little from ownership).
This is not a legal blog, nor an agricultural one. The reason I have a passing interest in this metaphor is that it’s sometimes referenced in relation to the ‘digital commons’ – and often as an argument that our notions of ownership, copyright and perhaps even intellectual property are outmoded in such contexts. The idea of the commons is directly linked to the Creative Commons and OER movements (both of which I support). The crucial difference, though, is that digital resources are often considered to be infinite. That is, they don’t run out. For example, if you print a book on paper, that costs resources – which have to be paid for by someone. Hence, giving free printed books to everyone doesn’t make any kind of economic sense (although I think a moral case can be made). But what about a PDF of your book? As we usually think about it, there’s no cost in copying a PDF – and hence it can be freely distributed. Of course, I am ignoring the intellectual labour involved in writing or editing the book, but for the sake of the argument, let’s recognise that these resources are, in this sense, essentially infinite. Once the book is written, there’s no significant cost in making 10, 100 or even 10,000 copies.
So far, so good. I’m not really bringing anything new to the argument at this point… But how does the rise of generative artificial intelligence change that equation? One of the features of genAI tools – indeed, one of their selling points in some markets – is that they can generate ‘content’ rapidly. I’m not a huge fan of the term ‘content’ anyway, but putting that to one side, I know that there are now blogs, articles, news stories and, probably very soon, video and audio that have been entirely AI-generated. There might be some human oversight, but the actual ‘writing’ is done by the tool. And, of course, the tool is just applying a model to generate this content.
Up until now, the model has been trained on huge datasets drawn from the internet. This has led to some interesting peculiarities, but crucially, most of this material was written by people – simply because AI hasn’t existed, or been usable in this form, before. But what happens when newer versions of the model are trained on data that includes AI-generated output? Well, we know – the model collapses in on itself, and eventually starts to generate gibberish. That has all kinds of interesting philosophical considerations, but I’m interested in the sociological ones.
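If you want a feel for what ‘collapsing in on itself’ means, here is a minimal sketch in Python. It is a deliberately crude analogy – a Gaussian fitted to a list of numbers stands in for a generative model, and each ‘generation’ is retrained only on samples produced by the previous one – so none of the numbers, parameters or the fitting method here reflect how real genAI systems are actually trained.

```python
import numpy as np

# Toy illustration of the "model collapse" idea: a very simple "model"
# (a Gaussian fitted to data) is repeatedly retrained on samples produced
# by its previous generation. This is a sketch of the general phenomenon,
# not of how any real generative AI system is built or trained.

rng = np.random.default_rng(0)

n_samples = 50  # deliberately small, so the effect shows up quickly
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # the "human-written" data

for generation in range(1, 201):
    # "Training": estimate the model's parameters from the current data.
    mu, sigma = data.mean(), data.std()
    # The next generation sees only the model's own output.
    data = rng.normal(loc=mu, scale=sigma, size=n_samples)
    if generation % 40 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")

# Typical output: the fitted standard deviation shrinks generation after
# generation, i.e. the tails of the original distribution are forgotten
# and the model's output becomes increasingly narrow and repetitive.
```

The shrinking standard deviation is the toy version of the problem: each generation forgets a little more of the variety in the original, human-produced data, until there is almost nothing left worth generating.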
Because we’re all going to continue to use AI, I imagine. Well, some of us might not – they might have the moral courage to refuse, and fair play to them. But enough of us will continue to use it – even if we know that we’re poisoning the well by doing so…