Talking about our newest paper because I think it's cool:
A standard constraint satisfaction problem assumes a fixed set of variables and searches for assignments to those variables that satisfy the constraints. Implicit in this framing is that you already know what the variables are.
As part of our research on giving cognitive AI "the ability to follow the rules", one research problem we identified is, roughly, learning the functions that map a constraint specification to potentially many places in an agent's state representations. Basically, we can't assume that map is a given in the agent's knowledge.
This map is what lets an agent evaluate whether, and where, the actual state complies with the constraint: it assigns state values to the variables in the disembodied constraint specification.
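To make the idea concrete, here's a toy sketch (all names here are my own illustrative choices, not the paper's formalism): a constraint spec over abstract variables, an agent state, and a grounding map from variables to locations in that state.

```python
# Illustrative sketch of partial constraint grounding.
# SPEC, GROUNDING, and agent_state are hypothetical names, not from the paper.

# A disembodied constraint specification over abstract variables.
SPEC = {"vars": ["speed", "limit"], "check": lambda b: b["speed"] <= b["limit"]}

# The agent's state representation: nested and heterogeneous.
agent_state = {
    "perception": {"vehicle": {"velocity_kmh": 52}},
    "memory": {"signs": {"last_speed_limit_kmh": 50}},
}

# The map from constraint variables to paths in the state.
# This is exactly the piece we can't assume is given.
GROUNDING = {
    "speed": ("perception", "vehicle", "velocity_kmh"),
    "limit": ("memory", "signs", "last_speed_limit_kmh"),
}

def lookup(state, path):
    """Follow a path of keys into the nested state."""
    for key in path:
        state = state[key]
    return state

def complies(spec, grounding, state):
    """Ground each variable via the map, then evaluate the constraint."""
    binding = {v: lookup(state, grounding[v]) for v in spec["vars"]}
    return spec["check"](binding)

print(complies(SPEC, GROUNDING, agent_state))  # 52 <= 50 -> False
```

Without GROUNDING, the spec alone can't be evaluated against the state at all; with it, compliance checking reduces to lookup plus evaluation.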
We call it partial grounding of constraints, and it looks somewhat like concept grounding, or grounding in general.
#AI #SymbolGrounding #CognitiveSystems #AcademicChatter