Have your LLMs take a hit at anything non trivial involving complex analysis. I will not say anything more, time to brush up on Cauchy Goursat. Vibe math with lots of alcohol 🤣😎
Division Chief, Nephrology, University of New Mexico
One of the benefits of #LLMs when used as information retrieval systems and not as slop generators is that one can discover things that are skipped in formal curricula, e.g., I would never have run into Buckingham's work without Grok's hallucinations:
https://en.wikipedia.org/wiki/Buckingham_pi_theorem
@soonix yeap
Some of my discord buddies are showing neither alignment nor engagement with the vision of the promised land of #Rust.
Should I cut them out of my life for safety?
Sounds about right ....
Performant Data Reductions with #Perl
Video of a talk I delivered remotely at the London Perl and #Raku Workshop in 2024
#commandline #datascience #openmp
https://www.youtube.com/watch?v=u_CkgLTeR4g
Strong words and very likely to be true
The amount of garbage findings in the literature due to insensitive checks for rank deficiency via the QR decomposition in #lapack must be quite high.
One could be regressing y against x and a clone of x and not even realize it, even in some high-value statistical models.
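A minimal sketch of that clone-column scenario (in Python/NumPy purely for illustration; the posts elsewhere use #Rstats): an SVD-based rank check sees the deficiency, but a least-squares fit still returns *some* coefficient vector without complaint.

```python
import numpy as np

# Design matrix with an intercept, x, and an exact clone of x.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
X = np.column_stack([np.ones(50), x, x])
y = 2.0 + 3.0 * x + rng.normal(scale=0.1, size=50)

# Three columns but only rank 2 -- an SVD-based check catches this.
print(np.linalg.matrix_rank(X))  # 2

# Yet lstsq happily returns a coefficient vector: the minimum-norm
# solution splits the true slope between the two identical columns.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)
```

Unless the analyst checks the reported rank (or the singular values) themselves, nothing in the fitted output screams that two "predictors" are the same variable.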
There is actually no system that *reliably* eliminates hallucinations in these statistical systems that are best thought of as Bloom filters over large vector databases.
Remember, it takes only one critical failure to lose street cred, and this is going to be the Thermopylae of people who hope to use these tools to circumvent a lack of any #programming experience or domain familiarity
So if you are asking me how things may play out with #AI and #programming here is a partially informed opinion: domain experts with some coding knowledge or coding experts with some domain knowledge will absolutely rock this. Everyone else will be screwed for different reasons:
1) domain experts without any coding knowledge will eat the humble pie of coding hallucinations, and
2) coding experts without any domain knowledge will be humiliated by domain hallucinations
Regarding #AI #hallucinations: I am writing demo code (in #Rstats) for a statistical paper and needed to write some linear algebra (I barely use R for this, so I don't keep that part of the language in working memory). Even with the chatbots having scrubbed all the linear algebra books, there were still issues getting the *algorithms correct*. But once I gave it the algorithms it did OK (except when it reversed the order of matrix multiplication lol).
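For anyone who hasn't been bitten by that last failure mode: matrix multiplication is not commutative, so reversing the order of a product silently changes the result. A tiny check (in NumPy here, though the demo code in the post is R):

```python
import numpy as np

# Two small matrices: reversing the multiplication order
# gives a different product entirely.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [3.0, 1.0]])

print(A @ B)  # [[7. 2.] [3. 1.]]
print(B @ A)  # [[1. 2.] [3. 7.]]
```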
Look at what I found at #thriftbooks...
Probably from a library; original 1971 edition (published two years before I was born!) and one of the very good (best?) and clearest books written on the topic of #linearmodels
Happy numpty uses software that relies on SVD or other rank-deficiency-aware fitting routines, but does not check for rank deficiency and does not account for it when reporting results. People are not ready for the implications of this.
TL;DR Most EV batteries will last longer than the cars they’re in. Battery degradation is at better (meaning: lower) rates than expected. Slow charging is better. Drive EV and don’t worry about your battery.
„Our 2025 analysis of over 22,700 electric vehicles, covering 21 different vehicle models, confirms that overall, modern EV batteries are robust and built to last beyond a typical vehicle’s service life.“
A society of engineers would acknowledge this limitation and use LLMs as accelerants the way we use high temperature settings in simulated annealing (SA) global optimization schemes: as a quick way to generate an approximate answer, and then painstakingly (the "cooling scheme" in SA) refine the answer by making it more precise.
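For readers who haven't met simulated annealing: a toy sketch of the pattern the analogy refers to (plain Python; the objective function and all parameters here are arbitrary illustrations, not any production optimizer). High temperature generates coarse approximate answers fast; the cooling schedule then does the slow, careful refinement.

```python
import math
import random

def anneal(f, x0, t0=10.0, cooling=0.995, steps=5000, seed=1):
    """Minimize f by simulated annealing on a 1-D real variable."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    for _ in range(steps):
        # High temperature -> large, exploratory proposals (the quick draft);
        # low temperature -> tiny, painstaking refinements.
        cand = x + rng.gauss(0.0, 1.0) * max(t, 0.01)
        fc = f(cand)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the system cools.
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-9)):
            x, fx = cand, fc
        t *= cooling
    return x, fx

# Multimodal toy objective: a quadratic bowl with sinusoidal ripples.
best_x, best_f = anneal(lambda x: x * x + 3.0 * math.sin(5.0 * x) + 3.0, x0=8.0)
print(best_x, best_f)
```

The point of the analogy: stopping at the high-temperature phase hands you an unrefined draft and calls it an answer.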
Unfortunately, we are not a society of engineers but a culture of Dunning-Kruger susceptibles. Enjoy your dopamine fix from the slop now.
5/5
There are also technical issues that make LLMs dissimilar to Bloom filters, but LLMs win by using natural language as the query language: it frees people from formulating very complex queries in a formal language, lowering the barrier to asking questions for non-experts in a field. Furthermore, the answer is not formulated in a formal language or served in a technical manner, further lowering the barrier to interpreting answers.
3/5
Vector Database = encodes information using more or less artificial, possibly statistical descriptions of recorded facts about the objects in the database.
Natural language = imprecise language, i.e., one cannot have formal proofs about the internal consistency or ambiguity of the query or the query result.
Bloom filter = sensitive but not specific way to search vector databases. Designed to not have false negatives (i.e. not miss), but will generate false hits (akin to hallucinations).
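A minimal Bloom-filter sketch in Python (the sizes m and k are chosen arbitrarily for illustration) showing exactly the behavior described above: no false negatives by construction, but false hits are possible.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: m bit slots, k hash functions."""

    def __init__(self, m=256, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per bit, for simplicity

    def _indices(self, item):
        # Derive k independent hash positions by salting SHA-256.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        # All k bits set -> "probably present" (maybe a false hit);
        # any bit clear -> definitely absent (never a miss).
        return all(self.bits[idx] for idx in self._indices(item))

bf = BloomFilter()
for word in ["lapack", "perl", "rstats"]:
    bf.add(word)

print("lapack" in bf)  # True: an added item is never missed
```

A membership answer of "yes" is only probabilistic, which is the structural analogue of a hallucination in the framing above.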
2/5
A very brief, somewhat technical take on Large Language Models (LLMs). They are *effectively* analogs of Bloom filters for searching vector databases using natural language interfaces, and reporting back using natural language.
This statement gives you all you need to know about potential and pitfalls, as long as you abstract the keywords to their essentials ....
1/5
@dlakelan consider repeat test at 24hr while you are still within the window for antivirals