An essay from Peter Coffin tackling AI from the creative --- or at least, "artistic" --- side of things:
https://petercoffin.substack.com/p/plato-is-still-a-bitch
>If style becomes proprietary, then aesthetics become real estate. Not just the output, but the process—the brushstroke, the rhythm, the vibe. This is enclosure. It turns shared cultural language into fenced-off property. When you try to copyright a style, you’re not protecting yourself. You’re criminalizing everyone else.
That's certainly the goal with copyright (fenced-off property). Which makes the copyleft approach far more attractive to me: using the system to subvert the system, or at least, to describe a better iteration of it.
>“Plagiarism” becomes the alarm bell—not because something valuable was stolen, but because someone unapproved is now capable of producing something “valuable.” It threatens the scarcity model. It suggests that skill isn’t a divine gift or years of costly training—it’s a pattern, a reproducible technique, or worse: an aesthetic sensibility that was never truly ownable to begin with.
I mean, the training data that generative AI works with often *is* the result of years of costly training. Here, it feels like the creative/artistic world, traditionally protected with copyright law, is merely catching up to the industrial world, traditionally protected with patent law. The industrial world understands perfectly well that the research phase costs significantly more money than the reproduction phase.
In a perfect world, something resembling FRAND agreements would be in place, not for ownership or capital, but for the labourers responsible for the innovations. Perhaps the creative/artistic world could benefit from a similar arrangement. In any event, though I thoroughly dislike IP law in its present form, I suspect that even a socialist economy would have some flavour of it.
=========================
Coffin doesn't speak to the more foundational limitations of generative AI --- or at least of LLMs --- as documented in the Stochastic Parrots paper. Perhaps someone else could chime in from that perspective. It's the one I'm most concerned with as a returning nursing student, surrounded by peers who lean heavily on LLMs such as ChatGPT.
There is a similar jumping-off point, I suppose: the notion of who gets to "control" what "the truth" is. Prior to LLMs, fluency was widely used as a proxy for credibility: if it sounds authoritative, then it is. LLMs blow that heuristic to smithereens. Somewhat ironically, I suspect that a stronger reputation/citation/authorship management system, a more formalized "sign-off" mechanism, may ultimately be crucial in re-establishing some semblance of coherence, of shared reality.
(A short/micro story floating around the fediverse comes to mind: a no-tech library containing only materials published well before the advent of LLMs, carefully guarded to prevent any form of scraping other than by-hand scribes, kinda reminiscent of Dune, honestly. I'd prefer an approach that uses PKI to formalize signed chains (or taxonomies, or trees, or webs, or ontologies) of claims; a rough sketch follows.)
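To make the "signed chains of claims" idea a bit more concrete, here's a minimal sketch. Assumptions: Python with the `cryptography` package's Ed25519 API; the claim fields, function names, and example statements are purely illustrative, not any existing standard. The point is that each claim is signed by its author's key and can reference a parent claim's digest, so provenance can be walked back to a known signer rather than inferred from how fluent the text sounds.

```python
# Illustrative sketch only: claim structure and field names are made up.
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature


def sign_claim(author_key: ed25519.Ed25519PrivateKey,
               statement: str,
               parent_digest: str | None = None) -> dict:
    """Sign a claim, optionally chaining it to a parent claim's digest."""
    body = {"statement": statement, "parent": parent_digest}
    payload = json.dumps(body, sort_keys=True).encode()
    return {
        **body,
        "digest": hashlib.sha256(payload).hexdigest(),
        "signature": author_key.sign(payload).hex(),
    }


def verify_claim(claim: dict, author_pub: ed25519.Ed25519PublicKey) -> bool:
    """Check that a claim's signature matches its body and the author's key."""
    body = {"statement": claim["statement"], "parent": claim["parent"]}
    payload = json.dumps(body, sort_keys=True).encode()
    try:
        author_pub.verify(bytes.fromhex(claim["signature"]), payload)
        return True
    except InvalidSignature:
        return False


# Example: a source claim, then a derived claim signed by someone else
# who explicitly cites (chains to) the original via its digest.
source_key = ed25519.Ed25519PrivateKey.generate()
citing_key = ed25519.Ed25519PrivateKey.generate()

original = sign_claim(source_key, "Normal saline is 0.9% NaCl.")
derived = sign_claim(citing_key, "Use 0.9% NaCl per the cited source.",
                     parent_digest=original["digest"])

assert verify_claim(original, source_key.public_key())
assert verify_claim(derived, citing_key.public_key())
```

The chain here is just a parent digest plus a signature; a real system would also need key distribution and revocation, which is exactly where the reputation/authorship management question comes back in.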
#socialism #communism #marxism #materialism #capitalism #ai #llm