#OverFitting

2025-12-18

Why Does A.I. Write Like … That?

Sam Kriss for the New York Times:

"""
According to the data, post-ChatGPT papers lean more on words like “underscore,” “highlight” and “showcase” than pre-ChatGPT papers […] And “delve” […] shot up by 2,700 percent.
"""

nytimes.com/2025/12/03/magazin

#EmDash #linguistics #overfitting #ElaraVoss #LLM #NYTimes #SamKriss

2025-10-12

A big misconception is that #KünstlicheNeuronaleNetzwerke (artificial neural networks) get better the more complex they are and the larger the dataset they are trained on. The currently completely underestimated problems of #Overfitting & #Overtraining are potential drivers of the next AI winter. #justsaying
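The effect is easy to show in a few lines. Below is a minimal, self-contained sketch (toy sine data, illustrative polynomial degrees, not tied to any particular network): raising model complexity keeps lowering training error while held-out error gets worse.

```python
import numpy as np

# Toy overfitting demo: fit polynomials of increasing degree to a few
# noisy samples of sin(2*pi*x) and compare train vs. test error.
rng = np.random.default_rng(0)

x_train = np.linspace(0.0, 1.0, 15)
x_test = np.linspace(0.0, 1.0, 200)
true_fn = lambda x: np.sin(2.0 * np.pi * x)
y_train = true_fn(x_train) + rng.normal(0.0, 0.2, x_train.shape)
y_test = true_fn(x_test)  # noise-free target for evaluation

def mse(degree):
    # Least-squares polynomial fit on the training points only.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

for d in (1, 3, 14):
    tr, te = mse(d)
    print(f"degree {d:2d}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

With only 15 training points, the degree-14 fit essentially memorises the noise: training error collapses while test error blows up, which is the overfitting story in miniature.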

2025-09-15

After years of #Feigen (figs) germinating here in #Waldkirch, I now also have direct confirmation that #Befruchter (pollinators) have established themselves.

In that context I had to give my mother a little lecture on #Bias and #Overfitting, because #KI (AI) #Bilderkennung (image recognition) classifies this as a #Feigengallwespe (fig gall wasp). In my humble layperson's opinion, however, it is not one, since to my eye the antennae look completely different. The system may therefore have been misled by the background.

According to Wikipedia, however, there are also other #Erzwespen (chalcid wasp) #Arten (species) that live as #Feigenwespe (fig wasps). I would guess it is a related #Insekt (insect), but as far as I understand, the fig gall wasp is the only species native to Europe. So maybe the AI is right after all.

#BlastophagaPsenes #Ficus #FicusCarica #Erzwespe #Insekten #Neophyt #Neophyten #Agaonidae

Close-up of a small insect on a green fig that has been gnawed by wasps.
Winbuzzer (@winbuzzer)
2025-07-16
Dennis Alexis Valin Dittrich (@davdittrich@fediscience.org)
2025-03-17

When Dimensionality Hurts: The Role of #LLM Embedding Compression for Noisy Regression Tasks d.repec.org/n?u=RePEc:arx:pape
"… suggest that the optimal dimensionality is dependent on the signal-to-noise ratio, exposing the necessity of feature compression in high noise environments. The implication of the result is that researchers should consider the #noise of a task when making decisions about the dimensionality of text.

… findings indicate that sentiment and emotion-based representations do not provide inherent advantages over learned latent features, implying that their previous success in similar tasks may be attributed to #regularisation effects rather than intrinsic informativeness."
#ML #autoencoders #Overfitting

2025-01-03

'On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training', by Chen Liu, Zhichao Huang, Mathieu Salzmann, Tong Zhang, Sabine Süsstrunk.

jmlr.org/papers/v25/22-0950.ht

#adversarial #overfitting #robustness

2024-12-13

"Three married couples. Aren't we just boring now?" joked Mia.
"You're frowning," Ari told Jin.
"Mia's #overfitting. She's too used to watching for threats to relax and realise Tom's mum is fond of her."
"Of course she is, Tom loves Mia. How could his mum not love her too?" #vss365

Daniele de Rigo (@dderigo@hostux.social)
2024-09-26

1/

Recent commentary [1]:
escalating concern over the more powerful #chatbots when they are used to go beyond the #knowledge of the human expert using them, rather than simply for controlled pre-processing within the domain of human-expert knowledge.

1. What is often called “hallucination/confabulation” (i.e. severe #extrapolation #uncertainty and #overfitting by the chatbot model) is apparently becoming increasingly realistic, while the human ability to detect it declines

2024-08-07

I’m listening to #MITtechReview [PODCAST] Large language models can do jaw-dropping things. But nobody knows exactly why. (7 Aug 2024, 26 min)
pca.st/episode/527cdfed-17ed-4

#edtechSR #MediaLit #AI #language #learning #DeepLearning #magic #alchemy #OverFitting

AI image created by Wes Fryer with Ideogram:
ideogram.ai/g/FItzSD2BQJiJtpb7

A futuristic, intricate machine with glowing elements is at the center, surrounded by numerous screens displaying cosmic and abstract designs in a dimly lit, high-tech setting.
pablolarah (@pablolarah)
2023-12-29
Magenta text on light pink:
Eigensolutions
composability as the antidote to overfit
2023-12-23

'Benign Overfitting of Constant-Stepsize SGD for Linear Regression', by Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade.

jmlr.org/papers/v24/21-1297.ht

#overfitting #overparameterized #sgd

2023-11-21

🤖💡 Ever struggled with overfitting in machine learning? It can lead to poor performance and inaccurate predictions. Learn more here 👉 ak-codes.com/overfitting/

Joxean Koret (@matalaz) (@joxean)
2023-09-30

One question for the people: what approach do you use to decide whether a decision tree or a random forest should work better? Do you simply try both and use whichever seems to work better?

According to what I have read, decision trees are more prone to overfitting, while a random forest is a more complex approach, which means little to me 😅
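For what it's worth, the pragmatic answer is usually the second one: fit both under cross-validation and let held-out scores decide. A minimal sketch of that procedure (synthetic data, near-default hyperparameters, scikit-learn assumed available):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Illustrative dataset: 500 samples, 20 features, 5 of them informative.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

tree = DecisionTreeClassifier(random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0)

# 5-fold cross-validated accuracy for each model.
tree_acc = cross_val_score(tree, X, y, cv=5).mean()
forest_acc = cross_val_score(forest, X, y, cv=5).mean()

print(f"decision tree: {tree_acc:.3f}")
print(f"random forest: {forest_acc:.3f}")
```

An unpruned single tree tends to overfit its training folds, so the forest usually wins here; either way, the cross-validated score, not the model family's reputation, is what makes the call.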

New Submissions to TMLR (@tmlrsub@sigmoid.social)
2023-09-02
Published papers at TMLR (@tmlrpub@sigmoid.social)
2023-08-29

Logistic-Normal Likelihoods for Heteroscedastic Label Noise

Erik Englesson, Amir Mehrpanah, Hossein Azizpour

Action editor: Bo Han.

openreview.net/forum?id=7wA65z

#label #classification #overfitting

New Submissions to TMLR (@tmlrsub@sigmoid.social)
2023-08-07

Label Noise-Robust Learning using a Confidence-Based Sieving Strategy

openreview.net/forum?id=3taIQG

#label #labels #overfitting

Published papers at TMLR (@tmlrpub@sigmoid.social)
2023-08-04

Learning Augmentation Distributions using Transformed Risk Minimization

Evangelos Chatzipantazis, Stefanos Pertigkiozoglou, Kostas Daniilidis, Edgar Dobriban

Action editor: Andriy Mnih.

openreview.net/forum?id=LRYtNj

#augmentation #augmentations #overfitting
