I have thought that Google's apparent lag in applying #ML to its services has been partly the result of realism: #llms are just not a good model for search, because of hallucinations, and Google engineers know this.
But DeepMind keeps finding ways to make ML useful: #alphafold is now showing real potential through collaborations with Lilly and Novartis.
#funsearch showed that combining traditional machine reasoning (good old-fashioned AI, or #gofai) with an LLM might work as a proof assistant, and now they've combined symbolic reasoning with an LLM to tackle International Math Olympiad geometry problems:
https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/
Here, the LLM suggests adding auxiliary constructions to the problem, and the symbolic reasoner explores the implications of each construction for solving the problem.
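As I understand it, the loop alternates exhaustive symbolic deduction with LLM-proposed constructions. Here's a toy sketch of that shape; every name, and the rule/fact representation, is my own invention, not DeepMind's actual system:

```python
# Toy sketch of an AlphaGeometry-style loop (hypothetical names, not the real API).
# Facts are strings; a rule is (premises: frozenset of facts, conclusion: fact).

def deduce(facts, rules):
    """Forward-chain the symbolic rules to a fixed point (the 'reasoner')."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def solve(facts, rules, goal, suggest_construction, max_rounds=5):
    """Alternate symbolic closure with LLM-suggested auxiliary constructions."""
    facts = set(facts)
    for _ in range(max_rounds):
        facts = deduce(facts, rules)
        if goal in facts:
            return facts                    # proof found
        new = suggest_construction(facts)   # stand-in for the LLM's suggestion
        if new is None:
            return None                     # LLM has nothing more to offer
        facts.add(new)
    return None
```

The point of the division of labor: the deduction step is sound but can't invent new objects, while the LLM step is creative but unsound, so its suggestions only ever add premises for the reasoner to check.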
One cool thing: the proofs produced are perfectly readable and human-verifiable.
Another cool thing: it trains itself by producing random diagrams, deriving relationships among them (I suspect there's a lot of algebra involved in this step), then producing proofs of those relationships.
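That self-training pipeline might look roughly like this toy sketch: sample random diagrams, grind out relationships by coordinate algebra (my guess at the "algebra" step), and keep the (diagram, relationship) pairs as training data. All of it is my own simplification, not DeepMind's pipeline:

```python
# Hypothetical sketch of synthetic-data generation for self-training.
import random

def random_diagram(n_points=4, seed=None):
    """Sample a random 'diagram': labeled points on a small integer grid."""
    rng = random.Random(seed)
    return {chr(65 + i): (rng.randint(-5, 5), rng.randint(-5, 5))
            for i in range(n_points)}

def derive_relations(diagram):
    """Derive relationships algebraically, e.g. equal segment lengths."""
    def d2(p, q):  # squared distance, exact in integer arithmetic
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    names = sorted(diagram)
    segs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    relations = []
    for i, (a, b) in enumerate(segs):
        for c, e in segs[i + 1:]:
            if d2(diagram[a], diagram[b]) == d2(diagram[c], diagram[e]):
                relations.append(f"|{a}{b}| = |{c}{e}|")
    return relations

def make_training_pairs(n_diagrams=100):
    """Collect (diagram, derived relationship) pairs as raw training data."""
    data = []
    for seed in range(n_diagrams):
        diagram = random_diagram(seed=seed)
        for rel in derive_relations(diagram):
            data.append((diagram, rel))
    return data
```

For example, a unit square yields relations like `|AB| = |AD|` and `|AC| = |BD|`. The real system would of course also have to produce the synthetic proofs, not just the statements.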
Anyway, this work seems real, and plausibly hype-reduced, and looks like it earns the sobriquet "#AI".