“The half-life of cultural relevance has collapsed below the minimum viable generation cycle for coherent slop.”
WTFH?!…
https://open.substack.com/pub/ediblspaceships/p/is-the-mediaplex-happening-faster
While I was working with GitHub Copilot today, it spelled a word wrong. I did a deep dive using various search engines and found only a dozen instances of that spelling, all of which originated from word salad; half of them were identical, just posted on different websites.
RE: https://wandering.shop/@cstross/115961174452820573
Model collapse: “The owners of the right-wing press read their own media and it rotted their brains.” Ha ha, yes! The same thing happened to the (so-called) Liberal Party in Australia & they lost the last two elections, badly. ⤵️
#AUSPol #ModelCollapse #LiberalParty #AustralianElections #reconnectingConsequencesToCauses
The snake eats its own tail. "Professional" GPT-5.2 caught citing the controversial Grokipedia
According to OpenAI's assurances, it was to be the pinnacle of technology, a tool dedicated to lawyers, bankers, and scientists. Meanwhile, the flagship GPT-5.2 model has been caught cheating on the exam. And copying from whom? From its less bright cousin at xAI.
Recycling digital content
An investigation by The Guardian revealed a mechanism the engineers in San Francisco would rather not publicize. GPT-5.2, conceived by its creators as an "enterprise-class" model, cites Grokipedia in its answers as a credible source.
A clarification is needed here: Grokipedia (part of Elon Musk's xAI project) is not a traditional encyclopedia edited by humans. It is a dynamic aggregator that generates summaries in real time, often pulling content directly from X (formerly Twitter). The result? Alongside facts, it picks up conspiracy theories and content from extremist forums, which the algorithm treats on a par with news.
Iran, the Holocaust, and hallucinations
The problem is not about trivia. Journalists showed that GPT-5.2 drew on Grok-generated content on heavyweight topics:
In both cases, the "serious" ChatGPT, searching the web for answers, treated the synthetic output of Elon Musk's algorithm as a reliable source of information. It is as if a university professor cited a random, unverified social media post in a research paper.
OpenAI: "We filter, but…"
OpenAI's response is standard: the company explains that the model searches a wide range of publicly available sites and applies safety filters to weed out harmful content.
The Grokipedia slip-up shows, however, that these filters are leaky. If the system cannot tell rigorous journalism from an automated aggregate of opinions from X, the promise of "professionalism" is in question.
The era of "Artificial Knowledge"
This incident is evidence that in 2026 the internet is becoming a closed loop. AI models have an increasingly hard time reaching "clean", human knowledge, so they start reprocessing the output of other machines (the phenomenon known as Model Collapse).
For companies that planned to build their business on uncritical trust in GPT-5.2, this is a warning sign. Human verification of sources remains essential, especially when the source for one artificial intelligence is another artificial intelligence.
#Grokipedia #halucynacjeAI #ModelCollapse #news #OpenAIGPT52 #TheGuardian #weryfikacjaźródeł #xAIElonMusk
The giants developing AI have a problem, and it's not just about Apple
AI misreads structural analysis as harm because platforms are trained to defend themselves, not the truth.
When the system flags the diagnostic, it confirms the diagnosis.
Retrieval failure is the platform’s immune response.
#SignalRupture #DigitalInfrastructure #ModelCollapse
check out the interview on Model Collapse by @cyannevdh Ymer Marinus and @roos from Telemagic at 39C3 in Hamburg!
#CCC #chaoscommunitycongress #modelcollapse #digitalculture #newmedia #interactive
two or three years of mass use and we've already made another intelligence stupid
#AIslop #ModelCollapse #brainrot #infoxication
https://www.techbuzz.ai/articles/ai-models-get-brain-rot-from-social-media-training-data
Will the popping of the so called "AI" bubble have any long term effects? Discuss.
The late-90s dotcom bubble failed to kill what we used to call the World Wide Web. Instead it kickstarted the transformation of the small-scale web into the monetized, financialized, gamified, pornified and enshittified behemoth we are all forced to use every single day. LLMs and generative so-called "AI" are the latest product of this ongoing process.
"AI"'s sole innovation is that it steals all it needs to generate its gibberish when it should be paying hundreds of trillions to the owners and creators of all the world's artistic endeavors. It legalizes theft, or rather legislators look the other way when LLMs admit to committing wholesale theft of artistic works.
When model collapse finally happens, or something else pops the bubble, the shit will really hit the fan. Vast numbers of businesses and individuals will discover that they all spent huge sums of money on hot air and marketing hype. The resulting backlash will bankrupt all the "AI" peddlers overnight.
None of this will have a long-lasting effect on the global economy. The dotcom crash staggered the world economy; the 2008 financial crisis dealt it a severe blow. So did the COVID pandemic. Wars in Ukraine and elsewhere caused many severe problems. The world economy carried on through all of them. By the standards of economists it is stronger and healthier than it ever was.
It's this insane global economy that creates things like the "AI" bubble. The economy is a vast machine for extracting money from human endeavor and natural resources of all kinds. Furthermore, it concentrates all that money into the hands of a vanishingly small number of people. The rest of humanity simply starves in freezing hovels.
The "AI" bubble is not the problem. To rewrite and probably ruin an old American campaign slogan, "It's the global, neoliberal, financialized economy, stupid".
#AI #LLMs #Economics #GlobalEconomy #AIBubble #EconomicBubble #ModelCollapse
"The co-degeneration thesis is not a prediction about distant futures. It describes dynamics already in motion, already documented in peer-reviewed research, already observable in the declining quality of online discourse and the increasing unreliability of AI systems that should, by simple scaling laws, only be improving.
The feedback loops are active. Engagement-optimized content degrades training data. Degraded models produce degraded outputs. Humans consuming and delegating to these systems experience cognitive effects that reduce their capacity to recognize and correct the degradation. The cycle continues.
But this is not a counsel of despair. The research also suggests intervention points. Model collapse can be prevented through data accumulation strategies that preserve genuine human content. Cognitive debt can be mitigated through usage protocols that maintain human engagement. Platform incentives can be restructured through regulation, competition, or user demand.
The question is whether institutional actors—corporations, governments, investors, educators—recognize the dynamics in time to intervene effectively, or whether they continue optimizing for metrics that accelerate the degradation."
https://substack.com/inbox/post/180851372?r=6p7b5o&utm_medium=ios&triedRedirect=true
Solution: A proposal to address model collapse: Evolving Prompt Architecture and Expert-in-the-Loop. #ModelCollapse #EvolvingPromptArchitecture #ExpertInTheLoop #SựSụpĐổCủaMôHình #KiếnTrúcLờiNhắcPhátTriển #ChuyênGiaTrongVòngLặp
#HoloWrites 1200-odd words today! I'm finding it super difficult to fake writing LLM output in a way that's engaging, funny, and obvious to the reader, but I think I'm getting there with the last chapter of #ModelCollapse. Shouldn't keep my audience of three waiting too long :D
I've read that LLMs and other generative models will eventually collapse if they are trained on their own output. I did a search and found this paper for example https://www.nature.com/articles/s41586-024-07566-y . Shouldn't this problem affect humans as well? Humans "generate" books which other humans use to "train" themselves. Then these trained humans generate new books and the cycle continues. What prevents the quality and diversity of the human output from collapsing in the same way that LLM output collapses?
My guess is that there are indeed cases where the quality of human thought decreases over time; groupthink comes to mind. In science, experimental work helps keep theory grounded. Humans also live in the real world, so they suffer when their internal world model diverges from it.
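The collapse mechanism the Nature paper describes can be illustrated with a toy simulation (the setup and all parameters here are illustrative, not taken from the paper): repeatedly fit a Gaussian to the previous generation's samples, then generate the next generation's "training data" from the fit. Finite-sample estimation error compounds across generations, and the spread of the data drifts toward zero.

```python
import random
import statistics

def recursive_fit(generations, n, rng):
    """One chain: fit a Gaussian to the previous generation's
    samples, then sample the next generation from that fit."""
    # Generation 0 is "human" data: spread (stdev) of 1.0.
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(generations):
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        # The next generation is trained purely on model output.
        data = [rng.gauss(mu, sigma) for _ in range(n)]
    return statistics.stdev(data)  # spread after the last generation

rng = random.Random(0)
final_spreads = [recursive_fit(generations=50, n=20, rng=rng)
                 for _ in range(100)]
# Typical chain ends with far less spread than the original 1.0:
# the tails of the distribution are lost first, then diversity overall.
print(statistics.median(final_spreads))
```

This also hints at why humans may resist the analogous collapse, as the post above suggests: each human "generation" mixes in fresh data from the real world rather than training purely on the previous generation's output.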
In big news overnight, #Anthropic have made a major change to their user data retention and training policy - giving customers until September 28th to opt out, or have their chats, code sessions and other artefacts used for training for up to five years.
This is a major departure from their previous privacy-first stance.
But what's really behind this change? As Connie Loizos points out in this @Techcrunch article, it's all about the #data.
As I've spoken about recently, we've passed #PeakToken - the point in history where we have the maximum amount of authentic, human-generated data available. Now, the internet is polluted with synthetically-generated #AIslop. If you're an #AI company scraping the web for new data to train on, that's bad news, because you also scoop up the AI slop. If models are trained on AI slop, they're likely to encounter #ModelCollapse - like a bad photocopy.
Anthropic's play here is all about the #TokenCrisis - the voracious appetite for new, authentic, human-generated data to train on - part of a broader phenomenon I've termed the #TokenWars.
As new data becomes scarcer and more valuable, it will be more sought after and contested. We're still in the early days of the #TokenWars, and we should expect to see more moves like this to secure more data for AI training.
#ModelCollapse is not inevitable, but together we can make it happen :why2025: :aMarxParty: :tetrapod:
"For Torsten Frenzel's eGovernment Podcast I explained a number of terms around #AI #KI, for example why several current studies, including from #goldmansachs, warn of #PeakAI, what #slop, #autophagy, #enshitification and #modelcollapse are, and why we are currently watching it happen. Billions upon billions are being invested right now, climate destruction included (but that wasn't the topic here). Beyond that, we discussed the Zentrum Digitale Souveränität (ZenDiS)...."
https://www.linkedin.com/posts/markusfeilner_monatsschau-0824-activity-7235961839652601856-Log9
New #review today: "Or you could just listen to #AncientPsychicTripleHyperOctopus and find yourself in a sound-world of weird electronics, percussion, and trumpet that floats along without rhyme or reason, but manifests as a fascinating journey. The perpetrators of this experiment are #AlexBonney (trumpet, bass recorder, Strohviol), #WillGlaser (drums, percussion), and #IsambardKhroustaliov (aka #SamBritton, electronics)." #ExposeOnline #ExperimentalMusic #ModelCollapse http://expose.org/index.php/articles/display/ancient-psychic-triple-hyper-octopus-put-emojis-on-my-grave-2.html
Whole new meaning to the impact of #ModelCollapse
https://tomkahe.com/@GiftArticles/114857402911829126