#SharpSAT

2024-01-21

#SharpSAT worked, sort of! I got the same result as my Python code and it only took 61x as long!

This is using github.com/Laakeri/sharpsat-td. The instance came out as 364 variables and 19038 clauses (yuck), and I ran with -decot 1 -decow 120 -cs 6500. I can't tell whether the debug output gives me any clues about how to speed things up.

Screenshot showing same result (913,625,856 solutions) using the SharpSAT-TD solver and custom Python code to run the same enumeration.  The SharpSAT-TD run took 2m25.390s while my Python code took 2.367s.

There is a bunch of debug output from SharpSAT-TD like cache miss rate and memory usage. It says it made 16,769,702 decisions.
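For comparison, brute-force model counting in Python is easy to sketch for small instances. This is a toy illustration of the idea, not the actual Python code from the screenshot, and it only scales to a handful of variables:

```python
from itertools import product

def count_models(num_vars, clauses):
    """Brute-force #SAT: count assignments satisfying every clause.

    clauses is a list of lists of nonzero ints (DIMACS-style signed
    literals). Tries all 2**num_vars assignments, so this is only
    practical for tiny instances.
    """
    count = 0
    for bits in product([False, True], repeat=num_vars):
        # literal v is true iff bits[v-1]; literal -v is true iff not bits[v-1]
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or not x2): exactly one of two variables
print(count_models(2, [[1, 2], [-1, -2]]))  # 2 models
```

A dedicated #SAT solver like SharpSAT-TD wins by caching component counts instead of enumerating assignments, which is why the decision count and cache miss rate in the debug output matter.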

2024-01-20

Weekend project: try to solve some #combinatorics #enumeration problems by reduction to #SharpSAT. (Which, to be clear, I thought was unlikely to succeed!)

I picked c2d reasoning.cs.ucla.edu/c2d/ because it scored highly in the 2020 Model Counting Competition arxiv.org/abs/2012.01323, but I am not sure this is the same version. The one I got is dated 2005 and is 32-bit only. It ran out of memory on this 364-variable, 942-clause instance (corresponding to 6 playing cards chosen from a standard 52-card deck).

Looking at the 2023 competition instead, I think I should try SharpSAT-TD github.com/Laakeri/sharpsat-td, but it is not as well documented. For example, I don't know whether it supports the "eclauses" (exactly-one clauses) extension of the DIMACS CNF format.
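If the solver turns out not to accept an exactly-one extension, the standard fallback is to expand each exactly-one constraint into plain CNF before writing the DIMACS file: one at-least-one clause plus pairwise at-most-one clauses. A sketch (the helper name is mine, not anything from the solver's docs):

```python
from itertools import combinations

def exactly_one_clauses(lits):
    """Expand EXACTLY-ONE(lits) into plain DIMACS-style clauses.

    Returns one at-least-one clause plus a pairwise at-most-one
    clause for every pair of literals. Quadratic in len(lits),
    which is fine for small groups like card slots.
    """
    clauses = [list(lits)]            # at least one literal is true
    for a, b in combinations(lits, 2):
        clauses.append([-a, -b])      # no two literals are both true
    return clauses

print(exactly_one_clauses([1, 2, 3]))
# [[1, 2, 3], [-1, -2], [-1, -3], [-2, -3]]
```

The pairwise encoding adds no auxiliary variables but many clauses; for large groups, commander or sequential encodings trade extra variables for fewer clauses.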

#Satisfiability

Output from running c2d.  It shows some statistics about the CNF input, some debugging output I don't understand, and a log of its memory allocations until it exits.
