#CoevolutionaryAlgorithm

2024-07-14
The lower order comes up unexpectedly (to me) in coevolutionary algorithms (which I studied in my PhD), but really in search and optimization more broadly.

Say you have a notion of "context", and a way of ordering these so that some contexts are larger than, more expansive than, or "above" others. And let's say that in each context there is a set of things identifiable as "best". I'm being vague because you can instantiate this basic idea pretty broadly. For instance, maybe the contexts are states of information in a search algorithm and "best" refers to the possible solutions that seem best in each state of information; as you search, you change (increase) your state of information, and might change your mind about which possible solutions are the best ones. As another example, the contexts could be possible worlds and "best" refers to which propositions are true in each possible world; as you progress from one possible world to the next, you might change your mind about which propositions are true.

Anyway, with that simple setup you can associate to each thing the set of all contexts in which it appears best. This set could be empty or could be very large or anything in between. Then the lower order shows up as a weak preference relationship among all the things: one thing is lower preference than another if, for each context in which it appears best, there's a larger or equal context in which the other thing seems best. Put differently, any time you think the first thing is best, there's a way to increase your context such that the other thing appears best. This is exactly the lower order between the sets of contexts in which each thing seems best. If the set of contexts in which one thing seems best is higher up the lower order than the set of contexts in which the other seems best, then the former thing is weakly preferred to the latter.
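For concreteness, here's a minimal Python sketch of that preference relation on a tiny, hand-made example; the contexts, the order on them, and the best-context sets are all invented for illustration.

```python
# Minimal sketch of the weak preference induced by the lower order, on invented data.
# Contexts and a partial order on them: leq(c, d) means context d is at least as
# large/expansive as context c.
covers = {("c0", "c1"), ("c0", "c2")}  # c0 sits below both c1 and c2

def leq(c, d):
    """c is below (or equal to) d in the context order."""
    return c == d or (c, d) in covers

# For each thing, the set of contexts in which it appears best.
best_contexts = {
    "x": {"c0"},         # x looks best only in the small context c0
    "y": {"c1", "c2"},   # y looks best in the two larger contexts
}

def weakly_preferred(y, x):
    """True iff Best(x) is below Best(y) in the lower order: every context where x
    looks best has a larger-or-equal context where y looks best."""
    return all(any(leq(c, d) for d in best_contexts[y]) for c in best_contexts[x])

print(weakly_preferred("y", "x"))  # True: y is weakly preferred to x
print(weakly_preferred("x", "y"))  # False: no context above c1 (or c2) has x looking best
```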

The intuition in a search setting is that contexts are states of information, a kind of compendium of what you've learned so far in your search. If x and y are possible solutions, and for every context (state of information) in which you think x is best there is always a bigger context, one with more information, in which you think y is best instead, then you ought to prefer y to x. The rationale is that any time you think x is best, there's a way to learn a little more and change your mind to think y is best instead, and that justifies preferring y to x.

Applied to modal logic, this notion corresponds to validity: if in every possible world where the proposition p is true there is an accessible world in which proposition q is true, then "p implies possibly q" is true in every world (valid).
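As a toy check of that correspondence, here's a sketch with a made-up Kripke model (worlds, accessibility, and valuation all invented) in which every world where p holds can reach a world where q holds, so "p implies possibly q" comes out true everywhere in the model.

```python
# Toy Kripke model, invented for illustration.
worlds = {"w0", "w1", "w2"}
access = {("w0", "w1"), ("w1", "w2"), ("w2", "w2")}   # accessibility relation
true_at = {"p": {"w0", "w1"}, "q": {"w1", "w2"}}      # where each proposition holds

def possibly(prop, w):
    """'possibly prop' holds at w iff prop holds at some world accessible from w."""
    return any((w, v) in access and v in true_at[prop] for v in worlds)

def implies_possibly(p, q, w):
    """'p implies possibly q' evaluated at world w."""
    return (w not in true_at[p]) or possibly(q, w)

# Every p-world reaches a q-world, so the implication holds at every world,
# i.e., it is valid in this model.
print(all(implies_possibly("p", "q", w) for w in worlds))  # True
```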

The appearance of "possibly" is suggestive, I think, and accords with this being a weak preference. "Necessarily" would correspond to a strong preference, but I'd expect (in the sense of demand) a search process to follow such a preference directly.

#math #ComputerScience #search #CoevolutionaryAlgorithm #SolutionConcept #ModalLogic

2024-07-06
Doing a bit of math for a paper I'm considering sending somewhere. This one has been brewing since 2007 or so but for a variety of reasons I never finished the work.

#SlowScience #math #CoevolutionaryAlgorithm #SolutionConcept
Picture of a blackboard with some math scribbled on it. Below it, in the foreground, there's a computer screen with some of the math typeset. I'm happy to describe the math to anyone who's interested, but I'm not going to attempt a more detailed description here.

2024-06-26
I keep returning to point-free styles of programming and reasoning, and I have so many mixed feelings about them. On the one hand, point-free style can be an extremely concise, elegant, and at times incisive way to write down an idea. On the other hand, if you've ever tried to write APL, for instance, it can be challenging to get right; and if you've ever tried to read someone else's APL, it can be mind-breaking. It's a serious question whether one should bother expressing things in this style and take the risk that it won't be read or received by many other people.

I have my own offering in this area: S↾⋄∂/∂. This is a relation between candidate solutions of a coevolutionary algorithm, where ∂ is the solution concept ( https://bucci.onl/notes/solution-concepts ) and S is a set of candidate solutions. Given a candidate solution s, this relation indicates all the actual solutions that lie "above" it according to the solution concept. ↾ is the shrink operator, and / is the right residual. I've written a paper, weighing in at around 6 pages, spelling out the ⋄∂/∂ part of that expression, though I didn't use these terms or symbols at the time. In https://bucci.onl/notes/monotonic-solution-concepts , I touch on this paper and subject; the weak preference order described there is captured by ⋄∂/∂.

A concise description of what shrink and right residual do, and why this expression is doing what I claim it does, would require roughly the same amount of space, maybe a bit less. Mu and Oliveira have written a few papers on the shrink operator; right residual is textbook relational algebra.
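For a flavor of those two combinators (just the general operators, not the specific ⋄∂/∂ expression, which the paper spells out), here's a sketch over finite relations-as-sets-of-pairs; the universe, the example relations, and the function names are all invented.

```python
# Finite relations as sets of (output, input) pairs: (b, a) in r means "b r a".
# Invented universe and example data; this only illustrates the two combinators.
universe = {0, 1, 2, 3}

def converse(r):
    """Relational converse: swap the two sides of every pair."""
    return {(a, b) for (b, a) in r}

def residual(x, y):
    """Right residual x/y: the largest r whose composition with y is contained in x.
    (b, c) is in x/y iff for every a, (c, a) in y implies (b, a) in x."""
    return {(b, c) for b in universe for c in universe
            if all((b, a) in x for a in universe if (c, a) in y)}

def shrink(s, r):
    """Shrink (in the style of Mu and Oliveira): s intersected with r / converse(s).
    Keeps the pairs (b, a) of s whose output b is r-related to every s-output of a."""
    return s & residual(r, converse(s))

# Example: s offers several candidate outputs per input; r is "less than or equal",
# so shrinking keeps only the least output for each input.
s = {(1, 0), (2, 0), (3, 0), (2, 1), (3, 1)}
leq = {(b, c) for b in universe for c in universe if b <= c}
print(sorted(shrink(s, leq)))  # [(1, 0), (2, 1)]
```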

So the six symbols in S↾⋄∂/∂ condense many tens of pages of natural language + math. In principle, once you knew ∂, ⋄, ↾, and /, you should be able to deduce what that full expression does. Following Mu and Oliveira, you should also be able to deduce an algorithmic implementation! But who the heck would put themselves through all that besides me? Who knows! 🤷

#APL #PointFree #programming #CoevolutionaryAlgorithm #RelationalAlgebra
