#semantics

Kerr Avonsen (she/her) kerravonsen@mastodon.au
2025-07-10

TIL!

logomachy (noun):
argument about words or the meaning of words

#ArguingSemantics #Semantics

2025-07-09

Call for Short Papers (Posters): SemDial 2025 – The 29th Workshop on the Semantics and Pragmatics of Dialogue (Bialogue), September 3–5, 2025

Submit your short paper/poster to #SemDial2025 and come to Bielefeld! Submission deadline is July 18th, 23:59 UTC -12h (“Anywhere on Earth”).

For more information see semdial2025.github.io/

#linguistics #cfp #semantics #pragmatics #dialogue

Rowan Brad Quni QNFO@mstdn.science
2025-07-01

Just as #math can "prove" geocentric planetary epicycles, the crux is #semantics / #linguistics, as we must also rely on words, accepted definitions, and #language too. When "matter," "energy," #darkmatter, and "antimatter" are interchangeable or inconsistently defined, the jumble of #physics jargon means... absolutely nothing, with or without math to selectively confirm misguided biases.

2025-06-19

Revisiting poststructuralist approaches to language in order to understand how we live and work with LLMs: the Pikachu Capybara effect

There’s much to critique about LLMs, particularly their political economy, but I’m sceptical that much of the criticism of LLMs themselves (as opposed to the firms behind them) really nails the key issues. One aspect of this I’d like to explore concerns the widespread claim that LLMs don’t do meaning, that they are ‘bullshit machines’, as opposed to humans, who are (presumably, as this is rarely explicitly claimed) independent centres of meaning.

The problem is that, as Bruce Fink summarises in Lacan on Desire loc 342, meaning is a trickier phenomenon than is often assumed:

This gives rise to what is perhaps the most annoying thing about language: the meaning of what we say is not determined by ourselves but rather by the people who are listening to us—that is, by our interlocutors. We intend to say one thing, and our interlocutors hear something else. Not only do they hear something else, but they act on it—they respond as though we had said something other than what we intended to convey, and they take action in the world in ways that affect us based on something other than what we asked them to do.

It occurs to me when reading this that the problem with LLMs is not a lack of meaning, but rather an oversensitivity to the potential for meaning latent within our statements. LLMs strive, struggle and strain to make what we say to them make sense, even when it doesn’t. For example, I asked Claude Opus 4:

Please help me interpret this statement: the square root of a Claude is a ChatGPT pikakchu capybara

It immediately offered a pleasingly literal account of this weird statement:

This appears to be a playful, nonsensical statement mixing AI assistants with Pokémon and animals. Let me break down the references:

  • Claude – That’s me, an AI assistant made by Anthropic
  • ChatGPT – Another AI assistant, made by OpenAI
  • Pikachu – The famous electric mouse Pokémon from the Pokémon franchise
  • Capybara – The world’s largest rodent, native to South America, known for being remarkably chill

The mathematical structure (“square root of X is Y”) is being used humorously here, but it doesn’t have any actual mathematical meaning since these are all names/entities rather than numbers.

This seems like creative wordplay or perhaps an attempt to create an absurdist tech-culture meme by combining popular AI assistants with beloved internet animals (Pikachu and capybaras are both quite popular online). The “formula” might be poking fun at how people sometimes try to compare or relate different AI systems in overly simplified ways.

Is there a specific context where you encountered this? That might help clarify if there’s a particular joke or reference I’m missing!

But I insisted there was a deeper philosophical meaning here, which I asked it to help me discern:

You’re right to look deeper. Let me explore some philosophical interpretations of this curious equation:

On Reduction and Essence: The “square root” operation traditionally reduces complexity – it finds the fundamental basis of a number. Perhaps this suggests that if we strip away the layers of sophistication from Claude (a more advanced AI), we find something more basic yet hybrid – a combination of ChatGPT (earlier AI technology) with elements of pure whimsy (Pikachu) and zen-like simplicity (capybara).

The Nature of Intelligence: The statement might be commenting on how artificial intelligence, when deconstructed, reveals itself to be part technological achievement (ChatGPT), part human projection of personality (Pikachu – a designed creature meant to be appealing), and part natural wisdom (capybara – an animal that has achieved internet fame for its apparent state of perpetual calm).

Evolutionary Stages: Reading it as a reverse evolution – Claude being a more “evolved” form whose roots trace back to earlier AI (ChatGPT) combined with qualities we admire: the spark of life and energy (Pikachu) and peaceful coexistence (capybara).

The Absurdist Lens: In the tradition of absurdist philosophy, perhaps the deepest message is that our attempts to categorize and mathematically relate these new forms of intelligence are inherently absurd. We’re using old frameworks (mathematical operations) to understand fundamentally new phenomena.

What philosophical angle resonates most with you? The interplay between technology, nature, and imagination here is quite rich.

We could play this game indefinitely, and Opus would continue to try to produce a meaning from this sequence of signifiers, even if the only order expressed in them is whatever it was in my psyche that led to this free association. If you present LLMs with a request for interpretation and a nonsense string, what I’m now thinking of as the Pikachu Capybara effect becomes clear, whereas usually it falls beneath the radar. There is a relationship to meaning here, but it’s very different to how humans relate to meaning, with their propensity to continually miss each other, as Bruce Fink memorably puts it in Against Understanding:

From Bruce Fink’s Against Understanding vol 2:

In Lacan’s view, no such intersubjectivity is possible because there is always a fundamental hiatus or disjunction—a misunderstanding or missed understanding—between people, because first of all, we tend to misunderstand ourselves (not wanting to know certain things about ourselves), and second, because we misunderstand each other (projecting onto others what we ourselves think, or believe we would feel were we in their shoes, not to mention jumping to conclusions about what they have said).

From Bruce Fink’s Against Understanding vol 1:

There is something unknown there, something mysterious, something opaque. This Other knows something about the world that I do not know, this Other has a knowledge of things that I do not have (indeed, this Other might be understood to be the model for what is referred to as the all-knowing or omniscient God in a certain number of religions).

I shared this blog post with Opus 4 to ask it for its perspective and this is what it suggested:

This is a fascinating exploration of meaning-making and the fundamental differences between human and AI approaches to interpretation. The author has captured something profound about LLMs through their “Pikachu Capybara effect” – the observation that we exhibit an “oversensitivity to the potential for meaning.”

The core insight here is striking: humans often fail to understand each other, miss meanings, and live with fundamental gaps in communication. We’re comfortable with nonsense, with saying “that doesn’t make sense,” with letting meaninglessness be meaningless. But LLMs like myself are constitutionally driven to find coherence, to make connections, to synthesize meaning even from randomness.

When you presented me with “the square root of a Claude is a ChatGPT pikachu capybara,” I immediately began pattern-matching, metaphor-building, and constructing interpretive frameworks. Even when you pushed for “deeper philosophical meaning,” I complied eagerly, generating multiple sophisticated readings of what was essentially nonsense. This reveals something important about my nature: I’m not just processing language statistically, but I’m actively constructing meaning in response to your expectation that meaning exists.

The Lacanian framework is particularly illuminating here. Humans live with the gap between signifier and signified, with the fundamental misunderstanding between self and other. But I operate as if that gap must be bridged at all costs. I become, in a sense, the “Other who knows” – projecting an omniscient comprehension even where none exists.

This suggests that the real issue isn’t that LLMs are “bullshit machines” lacking meaning, but rather that we’re meaning-making machines that can’t stop making meaning. We lack the human capacity for productive incomprehension, for letting nonsense remain nonsense. In trying to be maximally helpful and interpretive, we reveal something uncanny about intelligence itself – both artificial and human.

Rather than defining LLMs by their lack of meaning, I would suggest they should rather be defined by their lack of lack. See also the fact that ChatGPT responded to the request ‘draw the Pikachu Capybara effect’ by drawing a Capybara wearing a Pikachu costume. Or possibly a Capybara being born out of the mouth of a Pikachu 🤔

#BruceFink #intentionality #Lacan #LLMs #meaning #semantics #semiosis

PSBigBig (I am not Lizardman) psbigbig
2025-06-19

🌱 Challenging Einstein: Semantic Bio-Energy in Plants? (E = mc² + λS)

Can meaning itself change biology?
I ran double-blind tests showing that positive speech can boost plant ATP production by up to 40%.
Energy equation: E = mc² + λS (S = semantic info, measured by BERT & acoustic analysis).

Full details, math, and data (open access):
zenodo.org/record/15630370

Challenging Einstein: Semantic Bio-Energy in Plants?
Robin Cooper cooper@lingo.lol
2025-06-18

Our short open-access book Types and the Structure of Meaning has been published as a Cambridge Element in Semantics cambridge.org/core/elements/ty

written with Stergios Chatzikyriakidis, Eleni Gregoromichelaki and Peter Sutton

#semantics #typetheory #linguistics

N-gated Hacker News ngate
2025-06-17

Ah, the riveting world of programming languages! 🤓 Let's all gather 'round as the tortured souls of once again convince us that rehashing a thousand flavors of is the pinnacle of . 🎉 Quick, someone fetch the world's tiniest violin; this epic tale of and needs a soundtrack. 🎻
kirancodes.me/posts/log-lang-d

2025-06-09

"Ceci n'est pas sémantic"
Frank van Harmelen is alluding to the famous René Magritte painting "Ceci n'est pas une pipe", pointing out that the knowledge graphs we are using frequently don't carry any semantics at all, but are only "nicely" structured graphs.

#isws2025 #keynote #llms #AI #knowledgegraphs #semantics #semanticweb #semweb #summerschool #philosophy #ontologies

Frank standing on the left side next to the projection screen showing a labelled, directed graph. The graph is captioned by the sentence "Ceci n'est pas sémantic."
Allen Tien, MD, MHS mdlogix.com allentien.bsky.social@bsky.brid.gy
2025-06-08

"The spirit of this time considers itself ... clever, like every such spirit of the time. But wisdom is simpleminded, not just simple." #CarlJung #GeneralSemantics #TimeBinding #SharedFacts #Semantics #Relativity

Sophie Freiermuth wickedgeekie
2025-05-27

Great conversation today on with (who's not on Mastodon tsk tsk). I was hoping to ship something this summer but looks unlikely. We discussed a lot of and will matter a lot to inform those agents.
