#PaperThread

La Papeterie des Arceaux @papeteriedesarceaux@pixelfed.social
2025-03-23
Sunday at the Papeterie. Meditative. Shifu weaving, kami-ito, just for the pleasure of it #papeteriedesarceaux #tissage #weaving #kamiito #shifu #paperthread #fildepapier #paperart #meditation #paperyarn #slowtech
Jonathan Z Simon @jzsimon@fediscience.org
2025-03-13

#paperThread #auditory #neuroscience
Our latest paper just came out in the Journal of Neuroscience “Neural Dynamics of the Processing of Speech Features: Evidence for a Progression of Features from Acoustic to Sentential Processing.” We follow the cortical processing of four different speech-like stimuli (dushk88.github.io/progression-) through the brain, using MEG, from early auditory cortex to areas processing semantic-level information. The results show that each language-sensitive processing stage shows both an early (bottom-up-like) cortical contribution and a late (top-down-like) cortical contribution consistent with predictive coding. jneurosci.org/content/45/11/e1
fediscience.org/@jzsimon/11186

Jonathan Z Simon @jzsimon@fediscience.org
2024-02-03

#neuroscience #paperThread A new #preprint by Dushyanthi Karunathilake doi.org/10.1101/2024.02.02.578
Language has a hierarchical structure, and some neural processing stages seem to align with these levels. Here we record MEG responses from subjects listening to a progression of speech/speech-like passages: speech-modulated noise; non-words with well-formed phonemes; shuffled words; and true narrative. We can then trace the hierarchy of neural processing stages, from acoustical to full language. 1/7

Examples of the 4 stimulus types employed in this research
Jonathan Z Simon @jzsimon@fediscience.org
2023-12-01

#neuroscience #paperThread New paper in PNAS by recent PhD Dushyanthi Karunathilake! doi.org/10.1073/pnas.230916612
MEG responses from continuous speech listening lock to various stimulus features: acoustic, phonemic, lexical, & semantic. Could they provide an objective measure of when degraded speech is perceived as actually intelligible? This would give insight into how the brain turns speech into language, and be a treasure trove for clinical populations ill-suited for behavioral testing. 1/5

2023-11-15

Interesting developments in subquadratic alternatives to self-attention based transformers for large sequence modeling (32k and more).

Hyena Hierarchy: Towards Larger Convolutional Language Models

arxiv.org/abs/2302.10866

They propose to replace the quadratic self-attention layers with an operator built from implicitly parametrized long-kernel 1D convolutions.
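The idea can be sketched in a few lines of numpy (an illustrative toy, not the authors' implementation): the long filter is generated implicitly by a small function of position, and the convolution runs in O(L log L) via FFTs rather than the O(L²) of full self-attention. The tiny MLP and all names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def implicit_kernel(positions, w1, w2):
    # A tiny MLP maps each time position to a filter value, so the
    # kernel is parametrized implicitly rather than stored directly.
    h = np.tanh(positions[:, None] * w1)  # (L, hidden)
    return h @ w2                         # (L,)

def long_conv(x, k):
    # FFT-based causal 1D convolution: O(L log L) instead of the
    # O(L^2) cost of full self-attention over the sequence.
    L = x.shape[0]
    n = 2 * L  # zero-pad so circular convolution equals linear convolution
    y = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)
    return y[:L]

L = 1024
x = rng.standard_normal(L)
pos = np.linspace(0.0, 1.0, L)
w1 = rng.standard_normal(8)
w2 = rng.standard_normal(8)
k = implicit_kernel(pos, w1, w2)
y = long_conv(x, k)
```

Because the kernel is a function of position rather than a stored weight vector, its parameter count stays fixed as the sequence length grows.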

#DeepLearning #LLMs #PaperThread

1/4

2023-11-10

It's not the first time! A dream team of Eve Fleisig (human eval), Adam Lopez (remembers the Stat MT era), Kyunghyun Cho (helped end it), and me (pun in title) are here to teach you the history of scale crises and what lessons we can take from them. arxiv.org/abs/2311.05020 🧵 #paperthread #LLMs

@andriy_mulyar
my Twitter feed is full of ph.d. students having an existential crisis
2023-11-09

5/5

Our dataset also comprises CT and MRI scans, with patients' lesions segmented by an expert.
This allowed us to look at the distribution of lesions cluster-wise and validate the associations between symptoms and lesions.

Check out our pre-print and comment, ask questions, offer suggestions!
Although it is not simple to share the data, we will release code soon, as a means to replicate the approach on similar data and more.
The link is already in the paper!
And let us know if you have data you'd like to share and analyse with our developing methods👨🏾‍💻

We are deciding which journal would be the best match to review and possibly publish this work, of which I am super proud and for which I am thankful to co-authors Andrea Zanola, Antonio Bisogno, Silvia Facchini, Lorenzo Pini, Manfredo Atzori, and Maurizio Corbetta!

#scicomm #paperthread #preprints #neuroscience #machinelearning #mri #stroke #clustering

2023-11-09

1/n
Our pre-print is finally out!
Here's my first #paperthread 🧵
In this work, co-authors and I clustered ischaemic stroke patients' profiles and recovered common patterns of cognitive and sensorimotor damage.

...Historically, many focal lesions to specific cortical areas were associated with specific deficits, but most strokes involve subcortical regions and produce multivariate patterns of deficits.
To characterize those patterns, many studies have turned to correlation analysis, factor analysis, or PCA, focusing on the relations among variables, i.e. domains of impairment...

medrxiv.org/content/10.1101/20

#stroke #neuroscience #machinelearning #clustering

Iris van Rooij 💭 @Iris@scholar.social
2023-10-22

📝 Now reading: "From empirical problem-solving to theoretical problem-finding perspectives on the cognitive sciences" -- by @fedeadolfi, #LauraVandeBraak, and @mariekewoe (2023, PsyArXiv) #PaperThread 🧵

doi.org/10.31234/osf.io/jthxf

2023-09-27

@jkanev I follow #NewPaper OR #preprint OR #PaperThread and it's *very* quiet.

2023-09-06

DINOv2: Learning Robust Visual Features without Supervision

Tricks applied to DINO and iBOT to learn robust/generic features for many downstream tasks

My summary on HFPapers: huggingface.co/papers/2304.071
arXiv: arxiv.org/abs/2304.07193
Demo: dinov2.metademolab.com/

#arXiv #PaperThread #FoundationModels

2023-09-05

LidarCLIP or: How I Learned to Talk to Point Clouds

Align LiDAR encoder to CLIP image encoder and you can query LiDAR through image similarity or even text.
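Once both encoders map into CLIP's embedding space, querying reduces to cosine similarity. A toy numpy sketch with random stand-in embeddings (the real model would produce these with the LiDAR and CLIP encoders; all shapes and names here are assumptions):

```python
import numpy as np

def normalize(v):
    # Unit-normalize so the dot product equals cosine similarity.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical embeddings: assume a LiDAR encoder trained to match the
# frozen CLIP image encoder, so all modalities share one embedding space.
lidar_embeddings = normalize(rng.standard_normal((5, 512)))  # 5 point clouds
query_embedding = normalize(rng.standard_normal(512))        # one text or image query

# Retrieval: cosine similarity between the query and each point cloud.
scores = lidar_embeddings @ query_embedding
best = int(np.argmax(scores))
```

The same `scores` computation works whether the query embedding came from a text encoder or an image encoder, which is what makes text- and image-based LiDAR retrieval interchangeable.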

My summary on HFPapers: huggingface.co/papers/2212.068
arXiv: arxiv.org/abs/2212.06858
PWC: paperswithcode.com/paper/lidar

#arXiv #PaperThread #FoundationModels

2023-08-29

IT'S PAPER DAY!

Seidel & Prinoth et al. has just been accepted for publication in A&A and you can find it on arXiv already today: arxiv.org/abs/2308.13622

Let us tell you a bit more 👇🏼🧵

@JuliaVSeidel

#PaperThread #paper #exoplanets #observations #publication

2023-08-10

Hello, world!

Look, it's me 👀 My second first-author paper has just been accepted for publication in A&A and you can find it on arXiv already today ✨

Let me tell you a bit about it 👇🏼🧵

(okay a lot, it's a big kid)

arxiv.org/abs/2308.04523

#thread #PaperThread #astrodon #astronomy #observations

2023-07-28

SNAP: Self-Supervised Neural Maps for Visual Positioning and Semantic Understanding

Top-view and ground-view images can be combined into neural maps that aid visual positioning.

My summary on HFPapers: huggingface.co/papers/2306.054
arXiv: arxiv.org/abs/2306.05407
PapersWithCode: paperswithcode.com/paper/snap-

#arXiv #PaperThread #NewPaper #SSL

2023-07-28

Diffusion Models Beat GANs on Image Classification

Extracting features (activations) from a block at a chosen diffusion time step gives a decent classifier.
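A minimal sketch of the linear-probe recipe, with synthetic numpy features standing in for the frozen diffusion model's activations (everything here is illustrative, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for intermediate U-Net activations at one diffusion timestep:
# in the paper these come from a frozen diffusion model; here they are
# synthetic features where class 1 has a shifted mean.
n, d = 200, 64
labels = rng.integers(0, 2, n)
feats = rng.standard_normal((n, d)) + labels[:, None] * 1.5

# Linear probe: the standard way to test whether frozen features are
# linearly separable for classification.
X = np.hstack([feats, np.ones((n, 1))])          # add a bias column
w, *_ = np.linalg.lstsq(X, 2 * labels - 1, rcond=None)
preds = (X @ w > 0).astype(int)
accuracy = (preds == labels).mean()
```

If the probe's accuracy is high, the frozen features already encode class information, which is the paper's core claim about diffusion activations.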

My summary on HFPapers: huggingface.co/papers/2307.087

arXiv: arxiv.org/abs/2307.08702

Links: [PapersWithCode](paperswithcode.com/paper/diffu)

#arxiv #NewPaper #PaperThread #sd

2023-07-28

How is ChatGPT's behavior changing over time?

Monitors trends in performance of GPT-4 and GPT-3.5 (backend LLMs of ChatGPT) from March 2023 to June 2023 on diverse tasks.

My summary on HFPapers: huggingface.co/papers/2307.090
arXiv: arxiv.org/abs/2307.09009
GitHub: github.com/lchen001/LLMDrift

#arxiv #NewPaper #PaperThread #llm

2023-07-21

LightGlue: Local Feature Matching at Light Speed

Improves SuperGlue with changes to the transformer (GNN matching); the iterative design gives a speed boost.

My summary on HFPapers: huggingface.co/papers/2306.136

Links: [PapersWithCode](paperswithcode.com/paper/light), [GitHub](github.com/cvg/lightglue), [arxiv](arxiv.org/abs/2306.13643)

#lfm #arxiv #PaperThread #NewPaper #gnn

2023-07-19

Diffusion Hyperfeatures: Searching Through Time and Space for Semantic Correspondence

Fusing diffusion features into per-pixel image features (specific for downstream tasks)

@ducha_aiki's tweet: twitter.com/ducha_aiki/status/
arXiv: arxiv.org/abs/2305.14334
website: diffusion-hyperfeatures.github
GitHub: github.com/diffusion-hyperfeat

My summary on HFPapers: huggingface.co/papers/2305.143

#NewPaper #PaperThread #diffusion
