peter kleiweg 🇪🇺

There is no way back

peter kleiweg 🇪🇺 boosted:
2025-06-06

Oh man.....

peter kleiweg 🇪🇺 boosted:

The EU is currently congratulating itself because it managed to get a hashtag banned on TikTok in relatively little time.

LLMs encode the meanings of terms as vectors along many semantic dimensions in a semantic space ("latent space"). A concept, then, is a position in that space with a certain diameter — a kind of fuzziness or vagueness.

When I type something into ChatGPT or a recommender system, the input is broken down into tokens, and these tokens are mapped to such vectors.

“I want pizza” becomes:

["I", "want", "pizza", "."]

The tokens are then internally mapped to embeddings, like:

“cat” → [0.24, -1.12, 0.58, …]  
“dog” → [0.22, -1.09, 0.60, …]

That is, a list of numbers (often normalized between -1 and 1). But usually there are far more dimensions than shown here — an embedding typically has thousands of dimensions.
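
As a toy illustration of that lookup, with invented numbers and only three dimensions; in a real model this is a learned matrix with one row per token in the vocabulary:

import numpy as np

# Hypothetical embedding table; the values are made up for
# illustration, not taken from any real model.
embedding_table = {
    "cat": np.array([0.24, -1.12, 0.58]),
    "dog": np.array([0.22, -1.09, 0.60]),
}

tokens = ["cat", "dog"]
vectors = [embedding_table[t] for t in tokens]  # token -> vector
print(vectors)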

The latent space — the semantic space — is self-organizing. That happens during training. We don’t know what each dimension in the space represents.

The encoding has meaning. When we look at the vectors for "man" and "woman" and for "king" and "queen", we can subtract "man" from "woman" and "king" from "queen" and compare the difference vectors. They are almost, but not quite, the same – because the difference in meaning between these word pairs is, to us, almost, but not quite, the same.
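
A sketch of that comparison, with invented four-dimensional vectors constructed so the analogy roughly holds; real embeddings would come from a trained model:

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 means "same direction", 0.0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up vectors, for illustration only.
man   = np.array([0.9, 0.1, 0.2, 0.1])
woman = np.array([0.9, 0.1, 0.8, 0.1])
king  = np.array([0.8, 0.9, 0.2, 0.2])
queen = np.array([0.8, 0.9, 0.75, 0.25])

# The two difference vectors point in almost the same direction.
print(cosine(woman - man, queen - king))  # ~0.996: close to, but not quite, 1.0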

LLMs use these embeddings and their internal model to “compute the next output token.”

Recommender systems use such embeddings to compare vectors and find things that are similar to the thing we already have.
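
A minimal nearest-neighbor sketch of that comparison, with a hypothetical three-item catalog; a real system searches millions of trained item vectors, usually through an approximate index:

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented item vectors, for illustration only.
items = {
    "bike maintenance video": np.array([0.9, 0.1, 0.0]),
    "city cycling routes":    np.array([0.8, 0.3, 0.1]),
    "sourdough recipe":       np.array([0.0, 0.1, 0.9]),
}
user_interest = np.array([0.85, 0.2, 0.05])  # say, "bikes"

# Rank the catalog by similarity to the user's interest vector.
for name, vec in sorted(items.items(),
                        key=lambda kv: -cosine(user_interest, kv[1])):
    print(f"{cosine(user_interest, vec):.3f}  {name}")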

So a recommender learns everything that’s relevant to a user, and a modern recommender represents the user through a collection of vectors:

"Interested in travel, digital policy, databases, bikes."

These are all concepts that may also be near other concepts in the space.

At the same time, the recommender classifies content in the same space, and can find content that lies close to one of the user’s sub-interests — or content that’s new, but still compatible.

A modern recommender separates a user’s interests into distinct areas and can decide what the user is interested in right now — meaning, which of the various user interests is currently active. Then, this time, it might only serve database content, and next time only bike content.
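
One plausible way to get those distinct areas, sketched as k-means over the embeddings of a user's recent interactions; the vectors are invented, and TikTok's actual pipeline is not public:

import numpy as np
from sklearn.cluster import KMeans

# Invented embeddings of recently watched items.
history = np.array([
    [0.9, 0.1, 0.0],   # bikes
    [0.8, 0.2, 0.1],   # bikes
    [0.1, 0.9, 0.0],   # databases
    [0.2, 0.8, 0.1],   # databases
])

# Each cluster center is one sub-interest the recommender can
# choose to activate for the current session.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(history)
print(km.cluster_centers_)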

A modern recommender will also deliberately serve content that almost — but not quite — matches the user’s interests, to test how wide the bubble is around the center of that interest vector. So a bike session might also include urbanism, city development, and other nearby topics, and the recommender will watch carefully to see what kind of response that triggers — refining its recommendations based on that feedback.
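
A hypothetical sketch of that probing step (the function and its radius parameter are my invention, not a documented TikTok mechanism): nudge the interest vector by a fixed amount, recommend from around the nudged point, and watch the response:

import numpy as np

rng = np.random.default_rng(0)

def probe(interest: np.ndarray, radius: float) -> np.ndarray:
    # A random direction with a fixed step size: a point near,
    # but not at, the center of the user's interest.
    noise = rng.normal(size=interest.shape)
    noise *= radius / np.linalg.norm(noise)
    return interest + noise

bike = np.array([0.85, 0.2, 0.05])
print(probe(bike, radius=0.1))  # a nearby query point, e.g. "urbanism"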

A modern recommender will also know where the available content clusters are and prioritize content that is both relevant to the user and performs well or has current production capacity. In other words, where user interest and available content overlap well.

And a modern recommender will reevaluate every twenty minutes (“Pomodoro”, or “method shift” in educational theory) and attempt to shift the theme — to test whether another known interest can be reactivated.

That’s how TikTok works.

You can ban a hashtag on TikTok (“#skinnytok”).

But as long as related concepts are marketable and socially accepted — or even demanded — that won’t prevent anything.

As soon as you browse categories like “model,” “weight loss,” “fitness,” or “slim,” TikTok will slowly and systematically pull you into the same region, and the end result will be the same.

The actual language, the meaning, is encoded in the model's latent space, not in the words that are used (or prohibited).

And the content density in the model's coordinate system will gently push things into certain clusters. If you feed the system the right interests, you will always drift – relatively quickly even – into the same neighborhood, and then learn the neighborhood's current slang to get there with a single word.

No matter what the word actually is.

A similar example, using GenAI instead of a recommender:

"Draw a superheroine, an Amazon warrior that can fly and deflect bullets, running over a battlefield in the first world war."

These 21 words do not say "Wonder Woman"; they do not even go near comics, DC, or similar things.

Yet they draw a thousand-dimensional hyperbubble in latent space – the totality of knowledge known to ChatGPT – and the end result leaves just one choice: produce this blatant copyright violation.

I can trigger content with intent, without even going near the keywords that would be associated with it.

This is how jailbreaks work in LLMs, and it is also how you jailbreak TikTok bans.

"Draw a superheroine, an Amazon warrior that can fly and deflect bullets, running over a battlefield in the first world war."

Did I say wonder woman? 

No, I did not. 

Using 21 words, I drew a thousand-dimensional hyperbubble around inside all the content known to ChatGPT and the only content to reproduce was a singular thing, this blatant copyright violation.
peter kleiweg 🇪🇺 boosted:
Comics Outta Context @ComicContext@mstdn.social
2025-06-05
Four hands of the Wrecking Crew grasp a certain mystical object that is emanating crackling light. Off panel, the Wrecker says, “Grab onto the crowbar and concentrate – – CONCENTRATE!”
peter kleiweg 🇪🇺 boosted:
2025-06-05

I like how we took something computers were masters at doing, and somehow fucked it up.

ChatGPT 4o
› 9.11 and 9.9 - which is bigger
9.11 is bigger than 9.9.
› subtract them
To subtract 9.9 from 9.11, you perform the following calculation: 9.11 - 9.9 = 0.21
› use python
The result of subtracting 9.9 from 9.11 in Python is -0.79. This discrepancy is due to the way floating-point arithmetic is handled in Python, which can lead to small precision errors. The actual intended result should be 0.21.
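
For reference, a plain Python session returns 0.21 up to a tiny binary rounding error; the -0.79 is not something Python produces:

# IEEE 754 floating point: off by ~1e-16, not by a sign flip.
print(9.11 - 9.9)  # 0.20999999999999996
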
peter kleiweg 🇪🇺 boosted:
2025-06-02
peter kleiweg 🇪🇺 boosted:
Natasha Jay (she/her) 🇪🇺 @Natasha_Jay@tech.lgbt
2025-06-01

The cat trees are blooming early this year.

A photo of four cats somehow comfortably perched high up in the small leafless branches of a tree
peter kleiweg 🇪🇺 boosted:
2025-06-01

This should be on TV every day, possibly multiple times per day…. #AI

peter kleiweg 🇪🇺 boosted:
2025-05-31

Most of the heat from your gas stove never reaches the food. Two thirds of the energy escapes around the edges of the frying pan. Two TU Delft graduates came up with a solution: the Effium pan.

This frying pan cooks so fast that it cuts gas consumption by at least half. The secret behind the pan is its heat-fin technology, inspired by rocket engine design. The result? Up to 35% faster cooking and up to 50% gas savings, validated by TNO.

duurzaam-ondernemen.nl/nederla

Photo: the Effium pan. This frying pan cooks so fast that it cuts gas consumption by at least half.
peter kleiweg 🇪🇺 @pebbe
2025-05-31

@Eetschrijver @henkdeligt @EchteNachtraaf Based on experience. Power outages lasting days do happen. When did you last hear about a gas outage?

peter kleiweg 🇪🇺 @pebbe
2025-05-31

@henkdeligt @Eetschrijver @EchteNachtraaf
The government tells us to prepare for disasters. Keep an emergency supply for three days, including non-perishable food such as packs of rice and pasta. But what good is that if the power goes out and you cook electrically? Gas supply is a lot more reliable.

peter kleiweg 🇪🇺 boosted:
2025-05-30

Actress Loretta Swit (87), who played Hot Lips in M*A*S*H, has died

Loretta Swit, best known for her role in the television series M*A*S*H, has died at the age of 87. She died this afternoon at her home in New York.

nos.nl/l/2569370 #nieuws #nos

peter kleiweg 🇪🇺 boosted:
Comics Outta Context @ComicContext@mstdn.social
2025-05-30
A poor kid in the medieval era is running with a chicken under his arm and a huge smile on his ugly fucking face. He says, “CHUCKLE! Now I’m in the CHICKEN BUSINESS!”
peter kleiweg 🇪🇺 boosted:
2025-05-30

Ah! There is nothing like staying at home for real comfort.

peter kleiweg 🇪🇺 boosted:
2025-05-30

Parents of children with the exhaustion illness ME/CFS clash with doctors over therapy

Parents of children with ME/CFS, a chronic illness that leaves people exhausted, come into conflict with doctors when they resist a specific treatment for their child. A survey by the NOS shows that numerous scientists question whether this form of behavioral therapy does more harm than good.

nos.nl/l/2569354 #nieuws #nos

peter kleiweg 🇪🇺 boosted:
Kunst en Landschap @kunst_landschap@mastodon.nl
2025-05-30

“There is nothing wrong with these little houses, and the social, emotional, and cultural-historical values are brushed aside far too easily. What do you actually gain by demolition?”

tinyurl.com/mwssnhdh

peter kleiweg 🇪🇺 boosted:
Sordid Amok! @SordidAmok
2025-05-30

The antidote to materialism isn't minimalism; it's maintenance. Keep things. Fix them. Mend them. Grow old with possessions you know well because you've cared for them.

peter kleiweg 🇪🇺 boosted:
Das Wissen | SWR @DasWissen@ard.social
2025-05-30
peter kleiweg 🇪🇺 boosted:
2025-05-30

Therapy for children with the exhaustion illness ME/CFS called into question

Children with ME/CFS, a chronic illness that is extremely exhausting, are being offered a treatment in the Netherlands whose scientific basis is now in doubt. This emerges from a survey by the NOS of international medical guidelines, scientific studies, and the positions of medical advisory organizations and patient associations in the Netherlands.

nos.nl/l/2569260 #nieuws #nos

peter kleiweg 🇪🇺 boosted:
2025-05-30

the need for the inclusion of the graphic is a sad reminder of the shockingly low literacy rate among geese

Sign reads “No Geese” and includes a graphic of a goose silhouette in a red circle with a line through it.
