Why #DeepNeuralNetworks need #Logic:
Nick Shea (#UCL/#Oxford) suggests:
(1) Generating novel stuff (e.g., #Dalle's art, #GPT's writing) is cool, but slow and inconsistent.
(2) Just a handful of logical inferences can be used *across* loads of situations (e.g., #modusPonens works the same way every time).
So (3) by #learning Logic, #DNNs would be able to recycle a few logical moves across a MASSIVE number of problems (rather than generate a novel solution from scratch for each one).
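To see why one rule can cover so many cases, here's a toy sketch (mine, not Shea's) of #modusPonens as a single reusable function: from "P implies Q" and "P", conclude "Q", no matter what P and Q actually say. The `modus_ponens` helper and the example rules are illustrative assumptions, not anything from the talk.

```python
def modus_ponens(implications, fact):
    """Apply modus ponens: given implications {premise: conclusion}
    and a known fact, return the licensed conclusion (or None)."""
    return implications.get(fact)

# The *same* inference rule works across totally different domains:
rules = {
    "it is raining": "the ground is wet",
    "n is divisible by 4": "n is even",
}

print(modus_ponens(rules, "it is raining"))        # the ground is wet
print(modus_ponens(rules, "n is divisible by 4"))  # n is even
```

One tiny rule, reused everywhere — that's the economy point (3) is making.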



