Patrick Mineault

Neuro AI, comp neuro, vision, Python, data science, open science, ML, brains. Previously engineer @ Google, Facebook. Updates from xcorr.net

Patrick Mineault boosted:
2023-05-04

Convolutional neural networks with retinal-like pre-processing are fun.

Unlocking the Secrets of the Primate Visual Cortex: A CNN-Based Approach Traces the Origins of Major Organizational Principles to Retinal Sampling
biorxiv.org/content/10.1101/20

2023-04-23

@Kayson @marcusghosh Very cool! Really enjoyed the talk. There's a nice range of related ideas in this space that I think are underexploited for neuroscientific purposes: optimal brain damage (LeCun, Denker & Solla 1989), LIME (christophm.github.io/interpret) and SHAP values, under different sets of assumptions.
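
(Not from the thread, but for anyone curious what one of these looks like in practice: a minimal SHAP sketch on a toy model, assuming the shap and scikit-learn packages. The model and data here are made up for illustration.)

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))  # toy "stimulus features"
    y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)

    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    # Which features drive the model's predictions, per sample?
    explainer = shap.Explainer(model)
    shap_values = explainer(X[:10])
    print(shap_values.values.shape)  # (10, 5): per-sample, per-feature attributions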

Patrick Mineault boosted:
2023-04-22

Does anyone else feel that modeling work has a shorter lifespan than experimental work? When I'm searching the literature, I'm less likely to be interested in a modeling paper from e.g. 20 years ago than an experimental paper from the same time.

Modeling work is so dependent on the assumptions and mindsets (and computational tools) of the time that it seems best suited to moving thought forward in the moment, not (at least on average) to lasting a long time.

2023-04-21

@Kayson @marcusghosh Looks awesome! I hadn't seen this. Will take a look.

2023-04-21

I've been meaning to write this post for a long time: what does it mean for a neural network to be like the brain? I get into the nitty-gritty of the scores used to compare a neural net to the brain. xcorr.net/2023/04/20/how-can-a
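
(A taste of the kind of score the post covers: one common variant ridge-regresses neural responses onto network activations and reports held-out correlation. A minimal sketch on synthetic data; the post discusses several such metrics, not just this one.)

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    acts = rng.normal(size=(500, 128))  # network activations: (stimuli, units)
    W = rng.normal(size=(128, 20))
    neural = acts @ W + rng.normal(scale=5.0, size=(500, 20))  # fake recordings

    A_tr, A_te, N_tr, N_te = train_test_split(
        acts, neural, test_size=0.2, random_state=0
    )
    pred = Ridge(alpha=1.0).fit(A_tr, N_tr).predict(A_te)

    # Score = mean Pearson correlation across "neurons" on held-out stimuli
    r = [np.corrcoef(pred[:, i], N_te[:, i])[0, 1] for i in range(20)]
    print(f"mean held-out correlation: {np.mean(r):.2f}")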

Patrick Mineault boosted:
2023-04-19

New job opening at the Center for Open Science: DIRECTOR OF ENGINEERING.

See more information and apply here: cos.io/careers

2023-04-19

The links-of-the-day bot @lotd can help you see cool links that were shared on a Mastodon server that day. I've been tweaking it to add more features:

  • It's now better at grabbing and summarizing webpages using Selenium, so the hit rate is higher
  • It grabs a tl;dr from the page's meta information (OpenGraph) if available
  • It doesn't repost news articles, only science stuff
  • It links back to the original post

Let me know if you enjoy it! It's been a fun GPT-4-based project.
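
(For the curious, a rough sketch of the OpenGraph-grabbing step using Selenium, as mentioned above; the bot's actual code isn't shown here, so driver setup and fallbacks are my guesses.)

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)

    def get_og_description(url):
        """Return the page's OpenGraph description, if it declares one."""
        driver.get(url)
        tags = driver.find_elements(
            By.CSS_SELECTOR, 'meta[property="og:description"]'
        )
        return tags[0].get_attribute("content") if tags else None

    print(get_og_description("https://xcorr.net"))
    driver.quit()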

2023-04-19

I've been trying out this new linter for Python: super fast! Also just the right level of verbosity to catch errors early: astral.sh/ruff
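
(A taste of what it catches; the rule codes below are pyflakes rules that ruff enables by default.)

    # example.py: two classic slips ruff flags immediately
    import os  # unused import (F401)

    def greet(name):
        print(f"Hello, {name}")
        return undefined_var  # undefined name (F821)

    # Running `ruff check example.py` should report both F401 and F821.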

2023-04-17

@elduvelle added that, should work tomorrow

2023-04-08

How much energy does ChatGPT use? I estimate that thousands of GPUs were used in February just to serve responses, never mind training. My blog post: xcorr.net/2023/04/08/how-much-
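
(The flavor of the estimate, with stand-in numbers; every figure below is an illustrative assumption, not one from the post.)

    queries_per_day = 10_000_000  # assumed daily query volume
    gpu_seconds_per_query = 5     # assumed GPU time per response
    gpus = queries_per_day * gpu_seconds_per_query / 86_400  # seconds per day
    print(f"~{gpus:,.0f} GPUs running flat out")  # ~579 at these numbers
    # Add longer responses, more users, and replicas for latency,
    # and you quickly land in the thousands.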

2023-03-30

Does GPT-4 have common sense? Not really! My investigation on xcorr: xcorr.net/2023/03/30/does-gpt-

2023-03-24

I made myself a little bot with GPT4's help to scrape links which are shared daily on neuromatch.social to make sure I don't miss out on the latest scientific research. Check it out here: neuromatch.social/@lotd
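
(The bot's code isn't shown here, but the link-collecting step could look roughly like this, using Mastodon's public REST API; the parsing details are my assumptions.)

    import requests
    from bs4 import BeautifulSoup

    resp = requests.get(
        "https://neuromatch.social/api/v1/timelines/public",
        params={"local": "true", "limit": 40},
        timeout=10,
    )
    resp.raise_for_status()

    links = []
    for status in resp.json():
        # Status bodies are HTML; pull out external anchors
        soup = BeautifulSoup(status["content"], "html.parser")
        for a in soup.find_all("a", href=True):
            if "neuromatch.social" not in a["href"]:
                links.append(a["href"])

    print("\n".join(dict.fromkeys(links)))  # de-duplicated, in order seen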

Patrick Mineault boosted:
Spencer LaVere Smith @sls@neuromatch.social
2023-02-25

@NicoleCRust @DiedrichsenJorn David Poeppel has spoken about “conceptual resolution” (akin to spatial and temporal resolution, not resolution as in “solution”). It's a nice tool because it allows for degrees, rather than stark categories like “idea”, “theory”, or “model”. join.substack.com/p/will-we-un

Patrick Mineault boosted:
2023-02-25

Quasiuniversal scaling in mouse-brain neuronal activity stems from edge-of-instability critical dynamics

pnas.org/doi/10.1073/pnas.2208

2023-02-06

Dall-E and Stable Diffusion can generate fanciful images, but could they be a little brain-like? Maybe! My long read on diffusion models in neuroscience.

xcorr.net/2023/02/06/denoising

Patrick Mineault boosted:
2023-02-04

Looking for feedback on some new thoughts about Big Ideas in brain/mind research.

I've spent quite a long time researching and thinking about the history of brain/mind research in terms of the Big Ideas that have emerged. Pre-1960, it's pretty easy to list the big ideas that researchers had reached consensus around. Since 1960, that's harder to do. There's plenty of consensus around new facts (like umami is supported by receptor X on the tongue), but it's difficult to regard the things that brain researchers agree on as new, big ideas. At first, I (mis)interpreted this as a paucity of new ideas, but I no longer think that's correct - I've found a ton. Instead, I now believe that they are there but we haven't arrived at consensus around them.

I'm wondering: Why might researchers have arrived at more consensus around Big Ideas introduced 1900-1960 vs. 1960-2020? Obviously there's the filter of history and the fact that it takes time to work things out. But is there more to it than that? For example, have the biggest principles already been discovered, so that we are left with more of a patchwork quilt?

A sample of big ideas pre-1960ish with general consensus:
*) Nerve cells exist (it's not a reticulum)
*) Neurons propagate info electrically and then chemically between them
*) DNA > RNA > Protein is a universal genetic code for all living things
*) Explaining behavior needs intermediaries between stimuli and responses (cognitive maps/minds)

A sample of big ideas with no general consensus introduced post-1960ish:
*) Cortical function emerges from repetitions of a canonical element
*) The brain is optimized for goal-directed interactions with the environment in a feedback loop (prediction/embodiment/free energy)
*) The brain is a complex system with emergent properties that cannot be understood via reductionist approaches
*) Fine structural detail in the brain (the connectome) matters for brain function

I'd love to hear your thoughts.

Patrick Mineault boosted:
2023-01-25

Schmidhuber and Grossberg have finally collided on the Connectionists mailing list, paradoxically teaming up to assemble a kind of grievance Voltron.
