Jelle Zuidema

Willem Zuidema. Associate Professor of Natural Language Processing, Cognitive Modelling & Explainable AI, at the Institute for Logic, Language & Computation, University of Amsterdam.

Jelle Zuidema boosted:
Marianne de Heer Kloots (mdhk@scholar.social)
2025-07-05

Want to learn how to analyze the inner workings of speech processing models? šŸ”

Check out the programme for our tutorial, taking place at this year's Interspeech conference in Rotterdam: interpretingdl.github.io/speec

The schedule features presentations and interactive sessions with a great team of co-organizers: Charlotte Pouw, Gaofei Shen, Martijn Bentum, Tom Lentz, @hmohebbi, @wzuidema, @gchrupala (and me!). We look forward to seeing you there 😃

#SpeechTech #SpeechScience #Interspeech2025

overview diagram visualizing various interpretability techniques for speech models
Jelle Zuidema boosted:
2025-02-11

This coming Thursday, the Digital Affairs Committee will speak with various stakeholders about digital sovereignty. Experts @bert_hubert Hubert, Reijer Passchier, and Paul Timmers have serious concerns.
ibestuur.nl/artikel/actie-nodi

Jelle Zuidema boosted:
2025-02-11

"As the fediverse continues to grow and evolve, publishers are setting up shop in this new ecosystem. Some outlets have a fediverse strategy that complements their continued activity on traditional social media channels. Others have chosen to abandon traditional channels entirely in order to build a presence that’s more aligned with their values on the open social web."

medium.com/fedi-curious/lesson

Jelle Zuidema boosted:
Caroline de Gruyter (eurocaro)
2025-02-11

Elon Musk is for America what Cecil Rhodes was for the British Empire: an oligarch with far-reaching powers bestowed on him to help the state to grab as much of the world’s waterways, land, resources and labor as it can
My latest for @foreignpolicy

foreignpolicy.com/2025/02/07/e

Jelle Zuidema boosted:
2024-11-16

Finally, models like o1 require far more computing power than traditional ones, both in training and in use. 'In short: a model that still confabulates, but comes across as more convincing to people and uses even more energy than its already energy-guzzling siblings. What could go wrong?', Dingemanse asks rhetorically.

2024-11-15

@kimvsparrentak I'll be there! Curious about the discussions, and I'm already disappointed in advance that I can't join @bert_hubert in the parallel session...

Jelle Zuidema boosted:
2024-11-15

my current thoughts on the Bluesky boom: good for them!

My time here has brought home to me how much people want very different things from microblogging: some want a feed of fast-moving, short, quippy posts with reach (essentially early Twitter), others want slower, thoughtful (and maybe kinder) discussion, with privacy and control

if that splits into different platforms with different culture (and design), that may ultimately be helpful, and if the two systems have some degree 1/2

2024-09-21

@Rachel_Thorn It sounds like you're looking for an activist rather than an expert... Japan has many excellent Natural Language Processing experts, and even if they are fascinated by LLMs and not simply 'anti', they might be a more worthwhile addition to your symposium than you give them credit for. If I were preparing for the impact of a tornado or hurricane, I'd rather have a proper meteorologist at my symposium (even one fascinated by storms!) than an anti-hurricane activist.

Jelle Zuidema boosted:
Marianne de Heer Kloots (mdhk@scholar.social)
2024-07-09

✨ Do current neural speech models show human-like linguistic biases in speech perception?

We took inspiration from classic phonetic categorization experiments to explore whether & where sensitivity to phonotactic context emerges in Wav2Vec2 models šŸ”
(w/ @wzuidema )

šŸ“‘
arxiv.org/abs/2407.03005

ā¬‡ļø

paper title: Human-like Linguistic Biases in Neural Speech Models: Phonetic Categorization and Phonotactic Constraints
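
A minimal sketch of the kind of layer-wise analysis the paper describes, assuming the Hugging Face transformers API; the checkpoint name and the random input below are illustrative stand-ins, not the paper's actual stimuli or pipeline:

    import torch
    from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor

    # hypothetical 1-second clip at 16 kHz; in the paper's setting this would
    # be a controlled speech stimulus (e.g., an ambiguous sound embedded in
    # different phonotactic contexts)
    waveform = torch.randn(16000).numpy()

    extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
    model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
    model.eval()

    inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)

    # one (batch, frames, features) tensor per layer, plus the initial feature
    # projection; probing these layer by layer shows *where* an effect emerges
    for layer, hidden in enumerate(out.hidden_states):
        print(layer, hidden.shape)
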
2024-06-18

@ionica @martijnkleppe

Congratulations - very curious to see what it will bring!

Meanwhile, LinkedIn is offering some useful extra insights :).

2024-04-29

@pbloem Some serious subtooting happening here.

2024-04-08

Not just "she says, he says" but an investigation of who has the better arguments. Not just professional AI-debaters, but people that actually do research on AI or on the impact it has. Not just "Hinton predicts the end of humanity", but some serious detail on how negative consequences may come about. Not just "there are optimists, pessimists and skeptics", but an analysis of who profits from all the hype, and who may be deliberately fueling it and why.

2/2

2024-04-08

Imagine being "chief features writer" for the Financial Times, and for your feature on Artificial Intelligence hype you paste together a couple of quotes from the usual suspects you know from Twitter: Marcus, Bender, Chollet, ...

Henry Mance gets away with it, but I think the debate on AI & society deserves a wider set of voices, and journalists who dig a little deeper. 1/2

ft.com/content/648228e7-11eb-4

2024-04-08

@jasmijn02 intense!

2024-03-25

@deevybee "guest authorship" is shockingly common, I'm afraid, to the extent that many established academics don't even see a problem anymore with having their name on papers without any real contribution to the content, and many early career researchers think it's just the way it works.

One quibble with the linked article, though: if asking "occasional incompetent questions" is already an obstacle to authorship, many of us --me included-- can't coauthor anything anymore... šŸ˜€

Jelle Zuidema boosted:
2024-03-25

In a new paper, published today in Current Biology, we analyse the genome of renowned composer Ludwig van Beethoven using a polygenic index related to musicality, as a way to illustrate the limits of genetic predictions at the individual level. Beethoven, one of the most celebrated musicians in history, scored unremarkably, ranking between the 9th & 11th percentile based on modern samples. We explain why this is no surprise & how it can provide a valuable teaching moment on the complex relationships between DNA & behaviour.
An interdisciplinary collaboration across two Max Planck Institutes (Psycholinguistics in Nijmegen & Empirical Aesthetics in Frankfurt), University of Amsterdam, Karolinska Institute, Vanderbilt University and others.
#MastodonScience #science #music #genetics #genomics
@mpi_nl @maxplanckgesellschaft

authors.elsevier.com/sd/articl

On the left is the opening paragraph from the paper, with the title "Notes from Beethoven’s genome".
Rapid advances over the last decade in DNA sequencing and statistical genetics enable us to investigate the genomic makeup of individuals throughout history. In a recent notable study, Begg et al. used Ludwig van Beethoven's hair strands for genome sequencing and explored genetic predispositions for some of his documented medical issues. Given that it was arguably Beethoven's skills as a musician and composer that made him an iconic figure in Western culture, we here extend the approach and apply it to musicality. We use this as an example to illustrate the broader challenges of individual-level genetic predictions.
On the right is the main figure from the paper, which shows how Beethoven's polygenic index for the music-related skill of beat synchronization ranks between the 9th and 11th percentile of that for modern samples.
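
For intuition, a percentile rank of this kind just places one individual's score within a reference distribution. A minimal sketch with made-up numbers (the scores below are illustrative, not the study's data):

    import numpy as np
    from scipy.stats import percentileofscore

    rng = np.random.default_rng(0)
    # hypothetical polygenic scores for a modern comparison sample
    modern_sample = rng.normal(loc=0.0, scale=1.0, size=5000)
    # hypothetical polygenic score for one historical individual
    individual_score = -1.3

    pct = percentileofscore(modern_sample, individual_score)
    print(f"ranks at the {pct:.1f}th percentile of the modern sample")
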
2024-03-20

Haha, here's the commentary in full, a lot shorter than I expected. Content-wise, I rate my predictions as 2/3 correct:

(1) Not explicitly there, surprisingly. They just say infants can learn from less data.
(2) Spot on: point three of the comment.
(3) Implicit in point (1) of the commentary: language is hierarchical, LLMs only probabilistic.

As usual (but length is a good excuse, just this time), the commentary doesn't engage with the neural network literature that is aware of all those arguments.

Brief comment in Nature saying language is hierarchical and generative, unlike the 'probabilistic learning' of LLMs, and that LLMs can learn unlearnable languages.
2024-03-20

(1) LLMs cannot learn subtle syntactic constraints. Illustrated with an example where they fail to complete a complex sentence (e.g., with a double embedding) or fail to say which of two sentences is grammatical (a minimal-pair version of this check is sketched after this list).

(2) LLMs do not reproduce human inabilities to learn certain formal structures. Illustrated with an example from a programming language that LLMs learn perfectly, but naive human subjects fail at.

(3) This should be no surprise, because LLMs are next-word predictors that cannot do *merge*.
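
A minimal-pair grammaticality check of the kind in point (1) is easy to script. A sketch, assuming a Hugging Face GPT-2 checkpoint; the sentence pair is illustrative, not taken from the commentary:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def sentence_logprob(text):
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)
        # out.loss is the mean negative log-likelihood per predicted token
        return -out.loss.item() * (ids.shape[1] - 1)

    # hypothetical minimal pair with a long-distance agreement dependency
    grammatical = "The keys that the man lost are on the table."
    ungrammatical = "The keys that the man lost is on the table."

    # the model "says which one is grammatical" by the probability it assigns
    print(sentence_logprob(grammatical) > sentence_logprob(ungrammatical))
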

2024-03-20

I just got a notification about this commentary published in Nature: "Three reasons why AI doesn’t model human language", by Johan J. Bolhuis, Stephen Crain, Sandiway Fong & Andrea Moro.

I have seen so many commentaries by this team and their various coauthors (including, most prominently, Noam Chomsky) that I think I can spell out what they are going to say *before* reading the paper. So here are my three preregistered predictions: 1/3
