#deepRead

2023-10-13

Back in 2021
there was a lovely evaluation paper:
Automatically identifying label errors
Improving scores' reliability
Finding examples' difficulty
Active learning

aclanthology.org/2021.acl-long

@par @hoyle
#machinelearning #evaluation #IRT #LLM #deepRead
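A minimal sketch of the 2PL item-response-theory model behind use cases like the above, fit by gradient ascent on toy data (the simulated data and all names here are mine, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: simulate 5 models answering 40 items (hypothetical;
# the paper fits IRT to real leaderboard predictions).
true_theta = rng.normal(size=(5, 1))           # model ability
true_b = rng.normal(size=(1, 40))              # item difficulty
responses = (rng.random((5, 40)) <
             1 / (1 + np.exp(-(true_theta - true_b)))).astype(float)

n_models, n_items = responses.shape
theta = np.zeros(n_models)                     # estimated ability
b = np.zeros(n_items)                          # estimated difficulty
log_a = np.zeros(n_items)                      # log-discrimination, keeps a > 0

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

lr = 0.1
for _ in range(2000):
    a = np.exp(log_a)
    # 2PL: P(model i answers item j correctly) = sigmoid(a_j * (theta_i - b_j))
    z = a[None, :] * (theta[:, None] - b[None, :])
    err = responses - sigmoid(z)               # gradient of the log-likelihood wrt z
    theta += lr * (err * a[None, :]).mean(axis=1)
    b     -= lr * (err * a[None, :]).mean(axis=0)
    log_a += lr * (err * z).mean(axis=0)

# High-discrimination, mid-difficulty items carry the signal; items that
# even high-ability models "fail" are label-error candidates.
print(np.round(b[:5], 2), np.round(np.exp(log_a[:5]), 2))
```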

2023-08-30

Did you know:
Evaluating a single model on HELM took
⏱️4K GPU hours or 💸over $10K in API calls?!
Flash-HELM⚡️can reduce costs by up to 200×!
arxiv.org/abs/2308.11696

#deepRead #machinelearning #evaluation #eval #nlproc #NLP #LLM
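Why subsampling can slash the bill: scores usually stabilize long before the full benchmark is exhausted. A minimal sketch of that intuition only; `eval_on`, the step size and the tolerance are hypothetical, and Flash-HELM's actual procedure is more involved:

```python
import random

def eval_on(model, examples):
    """Hypothetical scorer: fraction of examples the model gets right."""
    return sum(model(x) for x in examples) / len(examples)

def cheap_score(model, benchmark, step=100, tol=0.02, seed=0):
    """Score on a growing random subsample instead of the full benchmark;
    stop once two successive estimates agree within `tol`."""
    rng = random.Random(seed)
    pool = list(benchmark)
    rng.shuffle(pool)
    prev, n = None, step
    while n <= len(pool):
        score = eval_on(model, pool[:n])
        if prev is not None and abs(score - prev) < tol:
            return score, n  # converged after n examples, not len(pool)
        prev, n = score, n + step
    return eval_on(model, pool), len(pool)

# Usage (hypothetical): score, n_used = cheap_score(my_model, helm_examples)
```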

2023-08-09

The newFormer is introduced,
but what do we really know about it?

@ari and others
imagine a new large-scale architecture &
ask how you would interpret its abilities and behaviours 🧵
arxiv.org/abs/2308.00189
#deepRead #NLProc #MachineLearning

2023-03-20

@mega Linear transformations can skip over layers, even straight to the last one

We can see 👀 what the network 🧠 thought!
We can stop🛑 generating at early layers!

arxiv.org/abs/2303.09435v1

#NLProc #deepRead
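A sketch of the early-exit half of the claim: decode from an intermediate layer through a fitted linear map and stop once the prediction is confident. `A`, `W_out` and the threshold are assumptions for illustration, not the paper's code:

```python
import torch

@torch.no_grad()
def early_exit_logits(h_layer, A, W_out, threshold=0.9):
    """h_layer: hidden state at an intermediate layer, shape (d,).
    A: linear map fitted to send layer-l states to final-layer space, (d, d).
    W_out: the model's LM head / output embedding, (vocab, d).
    Returns logits plus a flag saying whether we can stop here."""
    h_hat = A @ h_layer                # jump to the final layer linearly
    logits = W_out @ h_hat             # decode as if all layers had run
    p = torch.softmax(logits, dim=-1)
    return logits, p.max().item() >= threshold
```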

2023-03-20

🔎What's in a layer?🌹🕵🏻‍♀️

Representations are vectors
If only they were words...

Finding:
Any layer can be mapped well to another linearly
Simple, efficient & interpretable
& improves early exit

arxiv.org/abs/2303.09435v1
Story and 🧵
#nlproc #deepRead #MachineLearning
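The finding fits in a few lines: collect paired hidden states for the same tokens at two layers and fit the map by least squares. A sketch on synthetic data; shapes and names are mine, not the paper's:

```python
import torch

def fit_layer_map(H_src, H_tgt):
    """Fit A minimizing ||H_src @ A.T - H_tgt||^2 by least squares.
    H_src: (n, d) hidden states collected at layer l over a corpus.
    H_tgt: (n, d) states at a later layer (e.g. the last) for the
    same tokens. Returns A of shape (d, d) with h_tgt ≈ A @ h_src."""
    X = torch.linalg.lstsq(H_src, H_tgt).solution  # solves H_src @ X ≈ H_tgt
    return X.T

# Toy check against a known linear relation (hypothetical data):
d, n = 16, 1000
A_true = torch.randn(d, d) / d**0.5
H_src = torch.randn(n, d)
H_tgt = H_src @ A_true.T + 0.01 * torch.randn(n, d)
A = fit_layer_map(H_src, H_tgt)
print(torch.allclose(A, A_true, atol=0.1))  # True: the map is recovered
```

Plain least squares is the whole trick, which is what makes it simple, efficient, and interpretable.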

2023-03-15

Mind-blowing pretraining paradigm

Train the same model to predict left-to-right and right-to-left, each direction separately
Better results, more parallelization

arxiv.org/abs/2303.07295
#deepRead #nlproc #pretraining #machinelearning
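A minimal sketch of that objective as I read it: one shared set of weights, the usual next-token loss on the sequence and on its reverse. The `model` interface is hypothetical and the paper's full recipe may differ:

```python
import torch
import torch.nn.functional as F

def two_direction_loss(model, tokens):
    """tokens: (batch, seq) token ids. `model` is any causal LM mapping
    ids -> logits of shape (batch, seq, vocab) (hypothetical interface).
    The backward objective is just next-token prediction on the reversed
    sequence, so the two directions train the same weights and can be
    computed as independent, parallel batches."""
    fwd_logits = model(tokens[:, :-1])
    fwd = F.cross_entropy(fwd_logits.reshape(-1, fwd_logits.size(-1)),
                          tokens[:, 1:].reshape(-1))
    rev = tokens.flip(dims=[1])                  # right-to-left stream
    bwd_logits = model(rev[:, :-1])
    bwd = F.cross_entropy(bwd_logits.reshape(-1, bwd_logits.size(-1)),
                          rev[:, 1:].reshape(-1))
    return fwd + bwd
```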

2023-01-23

3 suspected causes of hallucinations going in,
only 2 held up

By studying how networks behave while hallucinating, they
filter hallucinations (with great success)

arxiv.org/abs/2301.07779
#NLProc #neuralEmpty #NLP #deepRead
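One internal signal such detectors can build on (my illustration, not necessarily the paper's exact statistic): a hallucinating decoder tends to detach from the source, e.g. piling its cross-attention on the final source position:

```python
import torch

def eos_attention_ratio(cross_attn):
    """cross_attn: (layers, heads, tgt_len, src_len) decoder cross-attention
    from one translation. Attention mass concentrating on the last source
    position (usually EOS) is a known symptom of the decoder ignoring the
    source. Pooling choice here is mine, not the paper's."""
    mass = cross_attn.mean(dim=(0, 1))    # average over layers and heads
    return mass[:, -1].mean().item()      # mean mass on the EOS column

def flag_hallucination(cross_attn, threshold=0.6):
    # Hypothetical threshold; in practice it is tuned on held-out data.
    return eos_attention_ratio(cross_attn) > threshold
```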

Otte Oldschooledudoc@mstdn.ca
2022-12-27

I’ve just spent the morning going through the #mastodon news feed, and I thoroughly enjoyed it. I honestly can’t remember the last time I got completely immersed in behind-the-story analysis in this way. Well done, #mastodon, and thank you.
#mastodonnews #deepread #news #newsanalysis

2022-12-07

What neurons determine agreement in multilingual LLMs?

#deepRead but some answers:
Across languages: 2 distinct ways to encode syntax
They share neurons, not info

Autoregressive models have dedicated syntax neurons (in MLMs they're just spread across)

@amuuueller@twitter.com yu xia @tallinzen@twitter.com #conllLivetweet2022
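A non-causal sketch of the kind of probing involved: rank neurons by how differently they fire on agreement minimal pairs. The paper itself uses causal interventions, which this does not do; `get_acts` and everything below is a hypothetical interface:

```python
import torch

def agreement_neurons(get_acts, good_sents, bad_sents, top_k=20):
    """get_acts(sentence) -> (n_neurons,) activations at the verb position
    (hypothetical; in practice, hook a chosen layer of the LM).
    good_sents / bad_sents are minimal pairs differing only in agreement,
    e.g. "the keys ... are" vs "the keys ... is".
    Returns the indices of neurons most sensitive to the contrast."""
    good = torch.stack([get_acts(s) for s in good_sents])
    bad = torch.stack([get_acts(s) for s in bad_sents])
    diff = (good - bad).mean(dim=0).abs()
    return diff.topk(top_k).indices
```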
