#NeuralRepresentations

2023-12-22

Research in mechanistic interpretability and neuroscience often relies on interpreting internal representations to understand systems, or on manipulating representations to improve models. I gave a talk at the UniReps workshop at NeurIPS on a few challenges for this area. Summary thread: 1/12
#ai #ml #neuroscience #computationalneuroscience #interpretability #NeuralRepresentations #neurips2023

Slide: exciting recent results in representational alignment... but what does it all *mean*?
Illustration: Figure from a recent survey paper (https://arxiv.org/abs/2310.13018) showing a 3 × 3 grid of illustrations from papers in cognitive science, neuroscience, and machine learning that used methods for measuring, bridging, or increasing representational alignment between different systems.
Fabrizio Musacchio (@pixeltracker@sigmoid.social)
2023-12-11

How to measure (dis)similarity between #NeuralRepresentations? This work by Harvey et al. (2023) (@ahwilliams lab) illuminates the relations among #CanonicalCorrelationsAnalysis (#CCA), shape distances, #RepresentationalSimilarityAnalysis (#RSA), #CenteredKernelAlignment (#CKA), and #NormalizedBuresSimilarity (#NBS); a minimal code sketch of one of these measures follows below this post:

🌍 arxiv.org/abs/2311.11436

#CompNeuro #Neuroscience #NeurIPS2023
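
To make one of the measures named in the post above concrete, here is a minimal sketch of linear Centered Kernel Alignment (CKA), assuming two hypothetical response matrices recorded for the same set of stimuli; the function name, toy data, and array shapes are illustrative assumptions, not taken from the paper.

import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two response matrices whose rows index the same stimuli.

    X: (n_stimuli, n_units_x), Y: (n_stimuli, n_units_y); unit counts may differ.
    """
    # Center each unit's responses across stimuli.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro"))

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 30))                      # 500 stimuli, 30 units
Q, _ = np.linalg.qr(rng.standard_normal((30, 30)))      # random orthogonal rotation
print(linear_cka(A, A @ Q))                             # ~1.0: same representation, rotated units
print(linear_cka(A, rng.standard_normal((500, 40))))    # much lower for unrelated responses

Linear CKA is invariant to orthogonal rotations of either representation (hence the ~1.0 above) but not to arbitrary invertible linear maps; such invariance trade-offs are part of what distinguishes the measures listed in the post.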

Fabrizio Musacchio (@pixeltracker@sigmoid.social)
2023-10-04

Very much looking forward to the upcoming #iBehave seminar with Carsen Stringer (@computingnature) on “Unsupervised pretraining of #NeuralRepresentations for #TaskLearning” 👌

⏰ October 06, 2023, at 12 pm
📍 online
🌏 ibehave.nrw/news-and-events/ib

#Neuroscience #CompNeuro
