#SymbolGrounding

Dimitri Coelho Mollo
dcm@social.sunet.se
2025-06-10

A new, updated, and streamlined version of The Vector Grounding Problem, joint work by @raphaelmilliere and me on the meaningfulness (or lack thereof) of LLM outputs and internal representations, is now available on arXiv.

arxiv.org/abs/2304.01481

New abstract in the thread below.

#philAI #LLMs #SymbolGrounding

2023-11-09

Talking about our newest paper because I think it's cool:

A standard constraint satisfaction problem assumes a fixed set of variables and searches for assignments to those variables that satisfy the constraints. Implicit in this framing is that you already know what the variables are.
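To make that framing concrete, here is a minimal sketch of a standard CSP in Python. The variables, domain, and constraint are all illustrative, not from our paper; the point is only that the variable set is fixed before search begins:

```python
from itertools import product

# A toy CSP: the variables are given up front, and we search for
# assignments over the domain that satisfy the constraint.
variables = ["x", "y"]
domain = [1, 2, 3, 4]

def constraint(assignment):
    return assignment["x"] + assignment["y"] == 5

solutions = [
    dict(zip(variables, values))
    for values in product(domain, repeat=len(variables))
    if constraint(dict(zip(variables, values)))
]
print(solutions)  # four assignments: (1,4), (2,3), (3,2), (4,1)
```

Note that the search never asks *what* "x" and "y" refer to; that question is exactly what the rest of this thread is about.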

As part of our research on how cognitive AI can have "the ability to follow the rules", one of the research problems we identified is approximately "learning the functions that map a constraint specification to potentially many places in an agent's state representations". Basically, we can't assume this map is "a given" in the agent's knowledge.

This map is what enables an agent to evaluate whether and where its actual state complies with the constraint: it assigns state values to the variables in the disembodied constraint specification.

We call it partial grounding of constraints and it looks somewhat like concept grounding or grounding in general.
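A hypothetical sketch of what such a grounding map might look like, assuming the agent's state is a nested structure and the map is a lookup from constraint variables to paths into that structure. All names and values here (the speed-limit scenario, the state layout) are invented for illustration, not taken from the paper:

```python
# Illustrative agent state: nested representations from different subsystems.
agent_state = {
    "perception": {"sign_value": 50},   # a speed-limit sign the agent saw
    "odometry": {"velocity_kmh": 62},   # the agent's current speed
}

# The grounding map: each variable in the disembodied constraint
# specification is mapped to a location in the agent's own state.
# In our framing, this map must be learned, not assumed as given.
grounding_map = {
    "speed_limit": ("perception", "sign_value"),
    "speed": ("odometry", "velocity_kmh"),
}

def ground(var):
    """Resolve a constraint variable to its value in the agent's state."""
    section, key = grounding_map[var]
    return agent_state[section][key]

def complies():
    # Evaluate the abstract constraint "speed <= speed_limit"
    # against grounded state values.
    return ground("speed") <= ground("speed_limit")

print(complies())  # False: 62 km/h exceeds the 50 km/h limit
```

The grounding is "partial" in the sense that the map may cover only some variables, and each variable may point to many candidate places in the state; this sketch shows only the simplest one-to-one case.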

#AI #SymbolGrounding #CognitiveSystems #AcademicChatter

2022-11-18

Galactica drew attention to #LargeLanguageModels. The utility (or not) of that specific application aside, people may find interesting two recent (and excellent!) talks at a Royal Society meeting in London that address issues cognitive scientists currently find interesting about these models, including the force of arguments about the lack of #SymbolGrounding in them:

The first is by Ellie Pavlick
youtu.be/1_wTMxdVgOI

The second is by Noah Goodman
youtu.be/dYXxkS4rrAs

@cogsci
