#ComputationalNeuroscience

2025-05-03

Researchers in France are creating a national network to organize interaction, communication, and training in #computationalneuroscience

If you are a computational neuroscientist working in France, consider joining and registering for our mailing list: listes.services.cnrs.fr/wws/su

bsky.app/profile/lauradugue.bs
linkedin.com/posts/laura-dugu%

2025-05-03

🧠 A New French Computational Neuroscience Network! 🧠

An exciting initiative is underway in France! Researchers are setting up a national network dedicated to #NeurosciencesComputationnelles to facilitate:

  ‱ Collaboration between research teams
  ‱ Sharing of knowledge and resources
  ‱ Training the next generation of researchers
  ‱ Visibility of French research in this fast-growing field

🔍 Do you work in #ComputationalNeuroscience in France? Whether you are an established researcher, postdoc, PhD student, or student, join our community!

📝 Subscribe to our mailing list to receive news, events, and opportunities:

âžĄïž listes.services.cnrs.fr/wws/su

Together, let’s strengthen French excellence in #NeurosciencesComputationnelles!
#Recherche #Neuroscience #CNRS #Science #IA #IntelligenceArtificielle

2025-04-30

A few weeks ago, I shared a differential equations tutorial for beginners, written from the perspective of a neuroscientist who's had to grapple with the computational part. Following up on that, I've now tackled the first real beast encountered by most computational neuroscience students: the Hodgkin-Huxley model.

While it remains remarkably elegant to this day, the model is also a mathematically dense system of equations that can overwhelm and discourage beginners, especially those from non-mathematical backgrounds. As in the first tutorial, I’ve tried to build intuition step by step, starting with a simple RC circuit, layering in Naâș and Kâș channels, and ending with the full spike-generation story.

Feedback is welcome, especially from fellow non-math converts.
neurofrontiers.blog/building-a

#ComputationalNeuroscience #Python #hodgkinHuxleyModel #math #biophysics

From: @neurofrontiers
neuromatch.social/@neurofronti

Building a virtual neuron – part 2

Image credit: Ionut Stefan

It’s been a tad longer than I intended since our intro on differential equations came out, but hopefully that means you had some extra time for memory consolidation. Otherwise, you can refresh your memory here. Today it’s finally time to tackle the long-awaited virtual neuron. But before we jump in, we need to have a quick housekeeping chat. As you can already glimpse from the list below, we mean business this time, so I strongly recommend that you read this article in chunks. Then again, I’m just a disembodied voice on the Internet and I can’t tell you what to do.

  1. Defining the goal
  2. The plurality of virtual neurons
  3. The foundation model
  4. From cats to neurons
    1. Small detour # 1: hypothesized, but not modeled
    2. Small detour # 2: positive, yet negative?
  5. From neurons to circuits
  6. From measurements to interventions
  7. Virtual neuron 1 – linear, boring, and instructive
  8. Virtual neuron 2 – to the moon and beyond
    1. Small detour # 3: that Nernst guy
    2. Back to virtual neuron 2
  9. Virtual neuron 3 – other ions have joined the party
  10. The full model – everybody gets a function
  11. Still alive?

Defining the goal

First, we need to understand what we want to do. “Building a virtual neuron” sounds cool (well, about as cool as math can ever sound), but it tells us surprisingly little about the task. We need to define the level at which we build this neuron. Do we want to simulate every protein and ion, and all their interactions? I mean, maybe. I admit that does sound pretty cool, but would we be able to interpret the results? My computational neuroscience professor used to say: “If you build a simulation as complex as the system you’re studying, you now have two systems you don’t understand.” And leaving that aside, could we even construct such a simulation right now? Well, no, not really. So instead we need to define three things:

  1. what we want to do;
  2. what we can and want to get out of it;
  3. what we can realistically accomplish.

For today, we want to build a model capable of producing action potentials, just like real neurons (1). We want to use this model to understand how neurons produce these potentials and how they are affected by both external stimuli and ion channel properties (2). And we can realistically accomplish this with a run-of-the-mill laptop and our own brains (3).

The plurality of virtual neurons

There isn’t just one single way to simulate a neuron. In fact, there are a lot of options. If you don’t believe me, have a look here. Choosing a computational model is an act of balance between complexity and efficiency. On the one hand, we want something complex enough to capture what we’re interested in: for example, if we want to know what happens to a neuron when we mess with its calcium channels, we need a model that includes them. On the other hand, this model needs to run on the available hardware and we should be able to make some sense of its results. So if we only care about calcium channels, it’s not such a good idea to include 300 other types of ion channels.

The foundation model

For today, I’ve chosen the Hodgkin-Huxley (HH) model. As some of you might already know, this is kind of the bedrock of modern computational modeling, and often the first boss you will encounter if you ever attend such a course.

While arguably not the first computational model, the HH model was pioneering as a quantitative, dynamic, biologically grounded one, and it remains remarkably elegant to this day. Of course, now it’s quite easy to look at it and think “well, big whoop, we already know how action potentials work”. But given the limited amount of information Hodgkin and Huxley had available at the time, it’s nothing short of fascinating how well the model reproduced empirical data and what predictions they were able to derive from it.

At the same time, coming from the biology side, I always had a bunch of questions about action potentials that remained largely unanswered until I made my way through the math jungle. For example, why do sodium (Na+) channels open slowly at first, then all at once? Why does the threshold for spike generation have that value and not another one? Why do potassium (K+) channels take so long to open? And why is it that we don’t always get one spike after another?

As we work our way through the model, we will be answering these questions and more. But similar to the previous article, we’ll start with a series of small, made-up examples (the code to follow along is here) and work our way up to the main beast. I hope that these examples bring clarity, but if they have the opposite effect, please let me know in the comments. That way, I can improve this guide (and future ones).

Throughout, I’ll try to highlight the underlying biology, as well as what Hodgkin and Huxley actually knew at the time. If you’d like a refresher on neuron structure and function, we do have this older post covering the basics, but I’ll try to weave those concepts in as we go.

From cats to neurons

Abstracting the movements of a cat to math is somewhat straightforward. If we get stuck in the equation, we have something tangible to go back to. So before we start with the math, let’s try to build the same kind of concreteness for neurons and action potentials.

We can begin from the same information Hodgkin and Huxley had available at the time. Neurons are enclosed by membranes, which usually block the movement of ions. Since the membrane is typically sealed, we can have different concentrations of ions on both sides: more Na+ outside, more K+ inside. While they didn’t yet know how these concentration gradients were maintained, HH recognized their importance.

They also observed that, if one were to place an electrode outside of the neuron and stick another one inside, a voltage difference of about -65 mV could be measured (by the way, these days it’s also known that the exact value varies by neuron type). In other words, the inside of the cell is more negative compared to the outside. Importantly, the value and its sign don’t matter that much, at least not for understanding the general principles. What matters is that there is a measurable difference and that sometimes there is a change in this difference.

If the membrane were forever sealed to the passage of any and all ions, then that would be the end of the story. We’d have no action potential to talk about (and we couldn’t anyway, because no intelligence, language, movement, nothing). But sometimes, the membrane allows ions to flow through it. You can imagine the ion concentrations we mentioned above as water stored in a tank. There’s much more Na+ outside the neuron than inside, so when the Na+ “tap” (i.e., ion channels) opens, Na+ rushes into the neuron, like water gushing into an empty chamber. This happens very fast and leads to a temporary reversal of the voltage difference sign: the inside becomes more positive than the outside. Then the Na+ tap closes and the K+ tap opens, allowing K+ to flow out and bring things back to normal.

This information is pretty much all we need for the HH model, although I’m sure you still have some questions.

Small detour # 1: hypothesized, but not modeled

We mentioned above that Hodgkin and Huxley didn’t know how the Na+ and K+ gradients were maintained. However, they hypothesized there must be some active mechanism that pushes Na+ out and brings K+ into the neuron, thus working to maintain the concentration gradients. Otherwise, each neuron would only have a few action potentials to fire before the ion concentration on both sides of the membrane equalizes.

And they were right. Years later, we found out that there are proteins embedded in the membrane, called ion pumps, that are open only on one side of the neuron at a time. They act kind of like a shuttle bus that only allows Na+ to board from the inside going out and K+ from the outside going in.

Small detour # 2: positive, yet negative?

I’m sure it’s not lost on any of you that: 1) both Na+ and K+ are positive ions, and 2) cells, including neurons, aren’t electrically charged. So how can we talk about a voltage difference?

There are a few key points here:

  1. overall, the amount of positive and negative charges is equal both inside and outside the membrane, but it’s the distribution of these charges close to the membrane that makes a difference;
  2. inside the neuron, there are also large negatively charged proteins which can’t leave the cell and tend to cluster close to the membrane;
  3. even though Na+ and K+ each carry a +1 charge, the concentration of Na+ outside the cell is larger than that of K+ inside the cell (around 150 mM for Na+ vs 100 mM for K+, depending on neuron type). Additionally, the pump we mentioned earlier throws out 3 Na+ ions for every 2 K+ brought in, thus maintaining the imbalance;
  4. there are also some K+ channels that remain open at rest. Due to the K+ concentration gradient, some of it flows out of the neuron, which means that some positive charge trickles outward, leaving the inside slightly more negative relative to the outside.

The combination of these factors generates the voltage difference measured by Hodgkin and Huxley.

From neurons to circuits

Coming back to our neuron model, now that we have the biology basics, we can begin to abstract. But instead of inventing an entirely new mathematical framework to describe how neurons behave, Hodgkin and Huxley realized that it was easier to repurpose what already existed in the physics of electrical circuits.

All the elements we described above have an equivalent in a circuit:

  • since the membrane stores charge, it behaves like something called a capacitor, i.e. a device which stores charge by accumulating it on two closely spaced surfaces insulated from each other;
  • the only way for ions to passively go through the membrane is through ion channels, which are typically closed. In other words, the channels provide resistance to the flow of ions, so we can represent them through resistors;
  • we also explained that there are differences in the concentration of ions between the inside and the outside of the neuron and that these differences drive the ion flow, so the ion concentration differences are our voltage sources or batteries;
  • and finally, although not explicitly included in the HH model, the ion pumps which restore the concentration gradients represent the current sources in our circuit, pushing ions in a specific direction to keep the system going.

As I said, even at the time, there was already a lot of math for how to work with electrical circuits. And that’s the key for cracking our simulations.

From measurements to interventions

In the circuit above, we could measure the voltage difference of the inside compared to the outside of the membrane. In fact, that’s what Hodgkin and Huxley did at first. They used giant axons from squids and silver electrodes to measure the so-called membrane or resting potential, which we said sits at around -65 mV.

But measurements alone aren’t enough. And by itself, the neuron and its membrane potential at rest aren’t that exciting. We want action potentials. Those happen when neurons receive stimuli or input. One could try to do these measurements in vivo, that is, when the one neuron we measure receives input naturally, either from other neurons or from the environment. But in this particular situation, Hodgkin and Huxley wanted to have precise control over the neuron’s input and they wanted to use the circuit framework from above. So instead, they used another set of electrodes to directly inject current into the axon of an isolated neuron.

Now, looking at the circuit diagram, physics tells us that if we inject some external current (we’ll call it $I_{ext}$) into this system before the point where the individual elements (capacitor and resistors) are branching out, this current will split to flow through each available path. So we’ll have a capacitive current $I_C$ and, for each type of channel, ionic currents, which for now we’ll lump under a generic $I_{ion}$. As nothing is lost in this idealized circuit, our original $I_{ext}$ will be the sum of the currents flowing through the individual elements, so: $I_{ext} = I_C + I_{ion}$.

Cool, but we actually care about voltage, right? That’s what the action potential is, a change in voltage difference between the inside and the outside of the neuron over time. Yes, and here’s how physics helps us again: it tells us that $I_C$ – our capacitive or membrane current – can be expressed in terms of the rate of change of the voltage, i.e. our old friend $dx/dt$. Since we’re talking about membrane voltage, we’ll just rename x to $V_m$. And the full formula is $I_C = C_m\,dV_m/dt$, where $C_m$ represents something called the membrane capacitance, and it’s just a constant, a number that we normally determine experimentally or read from a paper that already measured it. In this case, Hodgkin and Huxley measured $C_m$ and found it equal to 1 (ÎŒF/cmÂČ, but don’t stress about the units yet; by the way, what you’ve just heard is the collective shudder of all the world’s physicists at the idea of not stressing about units).

With that, we can rewrite $I_{ext} = C_m\,dV_m/dt + I_{ion}$, and shifting the terms, we get $C_m\,dV_m/dt = I_{ext} - I_{ion}$. Since $C_m$ is a constant, you will often see it written on the same side as $dV_m/dt$ (basically, constant = we don’t care much about it), but to make it clearer, we can also isolate $dV_m/dt = (I_{ext} - I_{ion})/C_m$. This will be our stepping stone for the full model. The lefthand side of the equation won’t change anymore. That’s the potential we’ve been wanting to simulate for a while now. The righthand side will gradually expand in complexity until it allows us to get something looking like the image below:

Virtual neuron 1 – linear, boring, and instructive

In the equation $dV_m/dt = (I_{ext} - I_{ion})/C_m$, we already know that $C_m$ is a constant equal to 1 ÎŒF/cmÂČ. $I_{ext}$ is what we pump into the system and we have full control over it. For now, we will try out three values: 0, 1, and 2 mA/cmÂČ. $I_{ion}$ tells us about how ions, like Na+ and K+, behave, but for now, we will completely ignore it by setting it to zero. So our equation reduces to $dV_m/dt = I_{ext}/C_m$, i.e. a slope of 0, 1, or 2 (mV/ms), depending on which $I_{ext}$ we pick. This is very similar to the first cat example from last time, except that our starting point, $V_m(0)$, is -65 mV.

But just because this example is so simple, it doesn’t mean we can’t extract any information from it. We observe that the higher the input current is, the faster our membrane voltage increases. And of course, if there is no input whatsoever, nothing happens.

We can also check what happens if we start from different values at $t = 0$ (in this case, -100 mV, -65 mV, and 10 mV). And we’ll look at just one external input value, $I_{ext}$ = 1 mA/cmÂČ. As you see below, not much. The line looks exactly the same, except that it starts from different values of $V_m$. We’ll check this again in the more complex model and see if it holds.
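If you’d like to experiment beyond the linked notebook, here is a minimal sketch of such a simulation (my own illustration, not the post’s actual code; the names and the 10 ms duration are arbitrary choices):

# Forward Euler integration of dV/dt = (I_ext - I_ion) / C_m, with I_ion = 0.
C_m = 1.0                 # membrane capacitance
dt = 0.01                 # integration time step (ms)
steps = int(10.0 / dt)    # simulate 10 ms

def integrate(I_ext, V0=-65.0):
    V = V0
    for _ in range(steps):
        V += dt * (I_ext / C_m)   # one Euler step; the ionic term is zero
    return V

# Larger input current -> faster linear rise; zero input -> nothing happens.
for I_ext in (0.0, 1.0, 2.0):
    print(I_ext, round(integrate(I_ext), 1))   # final voltage after 10 ms

Calling integrate() with the different starting values (-100, -65, 10) just shifts the same line up or down, as described above.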

Virtual neuron 2 – to the moon and beyond

Now it’s time to tackle $I_{ion}$. Instead of zero, we could give it another random value, like 3. But no matter what fixed value we give it, the only thing that would change in our equation would be how fast the membrane voltage increases. More importantly, we know this is unrealistic in neurons because when Na+ and K+ channels open and the ions travel from one side of the membrane to the other, the ionic currents also change.

That means $I_{ion}$ needs to be not a constant, but a function. More specifically, a function which changes over time (and later, over voltage too). One such example would be $I_{ion}(t) = -t$ – at every time step, our ionic current would be equal to the negative value of that time step. Our base equation would then transform into $dV_m/dt = (I_{ext} + t)/C_m$. For $I_{ext} = 1$ mA/cmÂČ, we would get the following:

We see that the membrane voltage now rises much faster, up to very unrealistic values (in practice, if we actually injected the current necessary for reaching such voltages, we’d fry the neuron long before getting there). And if we were to slightly vary either $I_{ext}$ or $V_m(0)$ as we did above, there would be barely any noticeable difference in the result.
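In code, the only change to the sketch from above is the time-dependent ionic term (again, my own illustration, not the post’s code):

# I_ion(t) = -t: the ionic term now grows with time, so V rises quadratically.
C_m, dt, I_ext = 1.0, 0.01, 1.0
V, t = -65.0, 0.0
for _ in range(int(10.0 / dt)):       # 10 ms
    V += dt * (I_ext - (-t)) / C_m    # dV/dt = (I_ext + t) / C_m
    t += dt
print(round(V, 1))    # about -5.0 mV: -65 + 10 + 10**2 / 2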

But remember how we represented our ion channels through resistors? Similar to capacitors, there is also a formula that relates current and voltage for these elements: $I_{ion} = g\,(V_m - E)$. $V_m$ is our membrane voltage, the one we’ve been plotting so far. So our base equation now expands into $C_m\,dV_m/dt = I_{ext} - g\,(V_m - E)$ (I’ve moved $C_m$ to the lefthand side to avoid using too many brackets). $g$ is the conductance for that ion. Conductance is a measure of how easily electric current flows through a material. In our case, this means how easily the ions pass through their respective channels. For now, we will pretend that $g$ is a constant, like 0.1 (mS/cmÂČ).

And $E$ is our battery from the circuit above. It represents the equilibrium potential of each ion, what they aspire to, and the voltage at which the membrane would settle if there were no other ions around and if the membrane were permeable all the time. In this case, we don’t need to pretend: $E$ is always constant for a given ion type. For example, for Na+, $E_{Na}$ is about +45 mV. If the membrane potential, $V_m$, were equal to +45 mV, we would say that Na+ is at equilibrium and there would be no movement of Na+ ions across the membrane. In real neurons, this is never reached, since other ions have different equilibrium potentials (for example, K+ sits at around -82 mV), but we’ll learn more about that later.
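Here is the same Euler loop with the ionic term included (a minimal sketch, assuming the 0.1 mS/cmÂČ and +45 mV from the text; the 100 ms duration is just long enough for the system to settle):

# C_m * dV/dt = I_ext - g_Na * (V - E_Na), with a constant conductance.
C_m, dt = 1.0, 0.01
g_Na, E_Na = 0.1, 45.0    # conductance (mS/cm^2) and Na+ equilibrium (mV)
I_ext = 0.0

V = -65.0                 # start at rest
for _ in range(int(100.0 / dt)):
    V += dt * (I_ext - g_Na * (V - E_Na)) / C_m
print(round(V, 2))        # ~45.0: the membrane settles at E_Na

With a nonzero I_ext, the same loop settles at E_Na + I_ext/g_Na instead – the shifted balance point discussed a few paragraphs below.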

Small detour # 3: that Nernst guy

But hold up: what does ion concentration have to do with voltage? And where do ion equilibrium potentials actually come from? Well, in practice, from neat little tables.

But conceptually, we need to make something clear, using Na+ as an example: we said that there are more Na+ ions outside than inside the neuron, so there is a higher concentration of Na+ on the outside of the membrane. If we open the tap, this concentration difference will push Na+ inside. But when does the pushing stop? Is it when the Na+ concentration is exactly equal on both sides of the membrane? It would be, if Na+ were the only ion around and there were no voltage difference between the two sides of the membrane.

But let’s imagine that we also have those negatively charged proteins from earlier. This changes the game, because even though the concentration of Na+ ions might equalize at some point, there would be another force pulling it in: the negative charge of the proteins, or the electrical gradient. Because these two forces compete, the actual voltage at which no Na+ moves around anymore is the one given above.

We can calculate this number from yet another equation that some guy named Nernst came up with: $E = (RT/zF)\,\ln([ion]_{out}/[ion]_{in})$. R, T, z, and F are constants, so we again ignore them. What matters is that this formula allows us to relate the ion concentrations $[ion]_{out}$ (outside) and $[ion]_{in}$ (inside) the neuron to voltage, thus giving us the equilibrium potential of each ion.

Bonus: this nifty formula tells us why sudden influxes of K+ can kill you. When the concentration of K+ outside the neuron increases a lot, the equilibrium potential of K+ ends up being much higher than -82 mV. In turn, this messes with the generation of action potentials, thus impairing communication between neurons. Once we have the full HH model, we’ll be able to check exactly how this happens.
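If you want to check these numbers yourself, here is a quick sketch (the physical constants are real; the ion concentrations are illustrative textbook-style values, not the post’s, so the results won’t exactly match the +45 and -82 mV used in this article):

import math

# Nernst equation: E = (R*T)/(z*F) * ln([ion]_out / [ion]_in), in volts.
R, F = 8.314, 96485.0     # gas constant J/(mol*K), Faraday constant C/mol
T = 310.0                 # about 37 degrees C, in kelvin
z = 1                     # valence of Na+ and K+

def nernst_mV(c_out, c_in):
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

print(round(nernst_mV(145.0, 15.0), 1))   # Na+: about +60.6 mV
print(round(nernst_mV(4.0, 140.0), 1))    # K+: about -95.0 mV
# The bonus scenario: raising extracellular K+ pulls E_K way up.
print(round(nernst_mV(10.0, 140.0), 1))   # about -70.5 mV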

Back to virtual neuron 2

For now, we see that if we were to model just Na+ currents and assume a constant conductance (in this example, $g_{Na}$ = 0.1 mS/cmÂČ), the membrane potential would eventually settle to the equilibrium potential of Na+.

This time, if we change our starting point $V_m(0)$, we observe a different behavior compared to the first virtual neuron: here, the membrane potential always settles at the Na+ equilibrium, regardless of whether we start from a value above or below that.

But what happens if we keep the resting state voltage the same and change the conductance $g_{Na}$? A higher conductance means that Na+ ions barrel through channels quicker (because more channels are open, not because the ions move any faster). That translates into the equilibrium potential being reached sooner.

I want to stress here that conductance isn’t just an abstract thing that makes the graph sharper. In real life, alterations in Na+ channel conductance can have devastating effects. For example, tetrodotoxin, a powerful toxin derived from pufferfish, effectively decreases Na+ conductance to zero by blocking Na+ channels and preventing Na+ influx into the cell. This is deadly. And in different types of epilepsy, Na+ conductance is again affected: either too high or too low, depending on the type of epilepsy. As we’ll see later, changes in conductance affect the properties of action potentials, such as shape and timing. At the level of the whole brain, this results in abnormal communication between neurons and can lead to the symptoms observed in epilepsy.

Moving on to varying the external input current $I_{ext}$, we see that the membrane potential no longer settles at the ion’s equilibrium potential, but at another value that changes with the strength of the external input $I_{ext}$. Looking again at our equation $C_m\,dV_m/dt = I_{ext} - g_{Na}\,(V_m - E_{Na})$, we see that when $I_{ext}$ is zero, the membrane voltage is only governed by the ionic term and settles at $E_{Na}$. But once we inject a steady flow of current into this system, the balance point shifts higher or lower, depending on the sign of $I_{ext}$. This will be important for action potential generation later on.

Virtual neuron 3 – other ions have joined the party

Alright, but we know Na+ doesn’t act alone. There is at least a K+ current. There are other ions as well, but Hodgkin and Huxley lumped everything else that might act in a neuron under a so-called “leak” current that is modeled as an additional resistor.

Once we add the K+ and leak currents to our model ($I_K$ and $I_L$), we now have a slightly longer differential equation for the membrane voltage:

$C_m\,dV_m/dt = I_{ext} - g_{Na}\,(V_m - E_{Na}) - g_K\,(V_m - E_K) - g_L\,(V_m - E_L)$.

Simulating this allows us to see that, like before, the membrane voltage settles at an equilibrium point. But this point is no longer equal to the equilibrium voltage of any single ion. Instead, it sits somewhere in-between. This in-between value is nothing more than the weighted average of the contributions of all ions to the membrane potential. The contribution of an ion is given by the product between its equilibrium potential and its conductance, so the full equation reads like this: $V_{rest} = (g_{Na}E_{Na} + g_K E_K + g_L E_L)\,/\,(g_{Na} + g_K + g_L)$.
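As a quick sanity check (the conductances here are made-up example values and the leak equilibrium potential is my own assumption; only E_Na and E_K come from the text):

# Resting potential as the conductance-weighted average of equilibrium potentials.
g_Na, g_K, g_L = 0.05, 0.3, 0.03        # mS/cm^2 (illustrative)
E_Na, E_K, E_L = 45.0, -82.0, -60.0     # mV (E_L assumed)

V_rest = (g_Na * E_Na + g_K * E_K + g_L * E_L) / (g_Na + g_K + g_L)
print(round(V_rest, 1))                 # -63.6 mV: in between E_K and E_Na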

We saw above that changing the Na+ conductance when only Na+ is present allows us to manipulate how fast we reach the equilibrium potential, while the equilibrium potential itself remains unchanged. But now we have more than one ion, each with its own conductance, and we see in the equation above that the membrane equilibrium potential takes conductances into account as well. So what happens if we change each ionic conductance individually?

We should be able to deduce this from the equation, but we’ll check it against the simulation results below. The blue line represents our original case from above. Since the K+ equilibrium potential is more negative than our original resting state potential $V_{rest}$, increasing the K+ conductance $g_K$ while keeping the Na+ conductance the same means that the membrane will settle at a new, more negative potential (orange line). In contrast, since the Na+ equilibrium potential is positive, increasing the ionic conductance $g_{Na}$ while keeping the same $g_K$ means that our neuron’s equilibrium potential goes up and we also reach it faster (green line). Now, if we increase $g_K$ while maintaining this higher $g_{Na}$, our membrane resting potential comes down, closer to that of K+. But we still get there fast, since the Na+ conductance is so high (red line).

In principle, we could also play around with the leak conductance $g_L$. However, as we will see later, in the HH model, the leak conductance is always assumed to be static, whereas $g_{Na}$ and $g_K$ do change under certain conditions.

The full model – everybody gets a function

We’ve already added quite a few details to our model, but there’s still a way to go. So far, we have a simulation of the membrane potential which includes multiple ion channels. This model is capable of settling at an equilibrium point, the resting state potential, but it doesn’t produce spikes yet. So let’s fix that. Fair warning, this next part is the trickiest (I know! As if the novel before was soooo easy!), so go slowly, pause often, and don’t worry if things take a few reads to click.

Key takeaway # 1: conductances are voltage-dependent
Let’s bridge biology and math now: we said that when the Na+ conductance increases (i.e. Na+ channels open), the membrane voltage also increases. But we also know from experiments that when the membrane voltage increases, K+ channels open. In other words, the K+ conductance increases. In math terms, that suggests conductance (for both Na+ and K+) is voltage-dependent.

Key takeaway # 2: there is a maximum conductance
Imagine all Na+ channels are open. Even then, there is still a limit to how much Na+ can pass through the membrane at every time step, because the ions need to wait for their turn to go through the channels, just like cars have to wait to pass through a crowded tunnel. That means conductance has a maximum value, which we can call $\bar{g}$. When all channels are open, $g_{Na} = \bar{g}_{Na}$ for Na+ and similarly, $g_K = \bar{g}_K$ for K+.

Key takeaway # 3: we can work directly with proportions of open channels
But what if only 50% of the channels were open? Well, the limit would be half of the maximum: $g = 0.5\,\bar{g}$. Why is this relevant? Because instead of directly relating conductance to voltage, we can relate the proportion of open (or closed) channels to voltage. The math is easier and it’s a bit more intuitive.

Putting it all together
First of all, since conductances are voltage-dependent and the membrane voltage changes over time, we actually have voltage- and time-dependent conductances. Important to note: this holds only for Na+ and K+; we assume the leak conductance to be fixed.

Secondly, we work with the proportion of open channels, not with conductances directly. Let’s pause for a moment and think about what we want to model. We basically want a sort of push-pull mechanism, such that when the voltage goes up, the proportion of open Na+ channels goes up, and when the voltage decreases, the proportion of closed channels increases. And the same way for K+.

Let’s start with K+. We can denote the proportion of open K+ channels with n. The proportion of closed channels will simply be 1 – n (total minus how many are open). Since we’re interested in how this evolves over time, we need to bring back our differential equation friend, in this case $dn/dt$. The push-pull mechanism we want can be written in the following form: $dn/dt = A(V)\,(1 - n) - B(V)\,n$, or following the Hodgkin-Huxley convention: $dn/dt = \alpha_n(V)\,(1 - n) - \beta_n(V)\,n$. There are two parts that matter here:

  1. the two functions A and B act like weights for the proportions of closed and open channels, respectively. A controls how fast closed channels open and B controls how fast open channels close;
  2. the above is not enough. A and B are voltage-dependent functions themselves and they need to be chosen in such a way that, when the voltage goes up, A goes up and B goes down, and vice-versa when the voltage goes down.

But how to choose them? Well, the equation above is called a first-order differential equation and has a known solution. Without going further into mathematical detail, Hodgkin and Huxley used that solution together with experimental measurements of K+ currents to derive specific formulas for $\alpha_n$ and $\beta_n$. I am including them here for completeness and because you will see them in the code, but there is no reason to stress over them. In practice, unless you use them on a daily basis, you’re just going to look them up when needed (and by the way, depending on the neuron type, the actual numerical values in these formulas will change): $\alpha_n(V) = 0.01\,(10 - V)\,/\,(\exp((10 - V)/10) - 1)$ and $\beta_n(V) = 0.125\,\exp(-V/80)$.

(Side note: the sign convention. One thing to notice above is that we use both $V_m$ and V. That’s not a typo. Normally, we define the membrane voltage as $V_m = V_{inside} - V_{outside}$, so the membrane voltage is negative at rest. In the HH model, however, V is defined as the deviation from rest, $V = V_m - V_{rest}$. That means the voltage is shifted such that at rest, $V = 0$ mV. And because all $\alpha$s and $\beta$s were fitted to these shifted values, we need to take that into account when working with the original HH model.)

For Na+, they modeled the Na+ channel activation in a similar manner, except they called the proportion of activated channels m. Again, for completeness, the respective equations were $\alpha_m(V) = 0.1\,(25 - V)\,/\,(\exp((25 - V)/10) - 1)$ and $\beta_m(V) = 4\,\exp(-V/18)$.
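In code, the rate functions and the push-pull equation look like this (a sketch using the classic published squid-axon fits, with V shifted so that rest = 0 as in the side note; the post’s exact numbers may differ):

import math

# Classic HH rate functions for the K+ gate n and the Na+ activation gate m.
def alpha_n(V): return 0.01 * (10.0 - V) / (math.exp((10.0 - V) / 10.0) - 1.0)
def beta_n(V):  return 0.125 * math.exp(-V / 80.0)
def alpha_m(V): return 0.1 * (25.0 - V) / (math.exp((25.0 - V) / 10.0) - 1.0)
def beta_m(V):  return 4.0 * math.exp(-V / 18.0)

# Clamp the voltage at 20 mV above rest and let n relax toward its steady
# state n_inf = alpha / (alpha + beta): a first-order push-pull process.
V, dt = 20.0, 0.01
n = 0.32                  # roughly the resting value of n
for _ in range(int(10.0 / dt)):          # 10 ms of voltage clamp
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
n_inf = alpha_n(V) / (alpha_n(V) + beta_n(V))
print(round(n, 3), round(n_inf, 3))      # n has nearly reached n_inf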

Now we almost have the full functioning HH model, but there are just a couple of minor tweaks left. Because Hodgkin and Huxley fitted their model to experimental data, they observed two interesting tidbits:

  1. the model fit better when the variable n was raised to the power of 4 and when m was raised to the power of 3. At the time, they didn’t know why that was the case, but in the meantime, we’ve found out that the K+ ion channel is made up of 4 subunits, each of which needs to be activated for the channel to allow the passage of K+. In that case, you can think of n as the proportion of channels where subunit 1 is activated (or the probability for this subunit to be activated). The proportion of channels where 2 subunits are activated is $n^2$, and so on. Similarly, Na+ channels have 3 activation domains that need to be opened for Na+ to pass through the channel;
  2. when the membrane voltage was held constant at a high value, K+ kept flowing out of the cell until the voltage was allowed to return to normal. But for Na+, Hodgkin and Huxley observed a different behavior: Na+ flowed into the cell at first, then it stopped. The Na+ current sharply decreased and regardless of how long the voltage was kept high, the Na+ current didn’t increase anymore. To model this behavior, they introduced a second variable for Na+, called h, which they used to model the proportion of inactivated Na+ channels. This needed the same type of $\alpha$ and $\beta$ functions, with $\alpha_h(V) = 0.07\,\exp(-V/20)$ and $\beta_h(V) = 1\,/\,(\exp((30 - V)/10) + 1)$. Again, nowadays, we know that Na+ has an inactivation domain that rapidly blocks Na+ channels at high voltages and only unlocks them when the voltage goes down again. That’s also why action potentials cannot spread backwards from where they came from.

And that’s it, we now have a full HH model. Put all together, it looks like this:
$C_m\,dV_m/dt = I_{ext} - \bar{g}_{Na}\,m^3 h\,(V_m - E_{Na}) - \bar{g}_K\,n^4\,(V_m - E_K) - g_L\,(V_m - E_L)$,
$dm/dt = \alpha_m(V)\,(1 - m) - \beta_m(V)\,m$,
$dh/dt = \alpha_h(V)\,(1 - h) - \beta_h(V)\,h$,
$dn/dt = \alpha_n(V)\,(1 - n) - \beta_n(V)\,n$.

Importantly, by itself, the model doesn’t really do anything. If the external input is zero and we start the model from an initial membrane voltage below a certain threshold (in this case, -60 mV), it quickly decays back to the resting state potential (which you can calculate yourself using the formula given above and the maximum conductances and ionic equilibrium potentials given in the code here.)

If we start the model above a certain threshold (for example, -50 mV), it will fire a single spike before going silent forever.

To get more than one spike, we need to drive it with external input current. So far, we’ve used constant current, and we’ll stick with that for today (in the next part, we’ll also try out time-varying currents). For a high enough current, we see that the model fires one action potential after the next. You can try it out for yourself to see what happens for different values of $I_{ext}$, and next time we’ll try a more systematic analysis as well.
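And if you’d rather tinker in a blank file than in the linked notebook, here is a compact, self-contained sketch of the full model (my own implementation, using the classic squid-axon parameters in the shifted convention, so the exact values differ slightly from the ones in this post):

import math

# Full Hodgkin-Huxley model, forward Euler, shifted convention (V = 0 at rest).
C_m = 1.0                                   # membrane capacitance
gNa_bar, gK_bar, g_L = 120.0, 36.0, 0.3     # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 115.0, -12.0, 10.613       # shifted equilibrium potentials (mV)

def a_n(V): return 0.01 * (10.0 - V) / (math.exp((10.0 - V) / 10.0) - 1.0)
def b_n(V): return 0.125 * math.exp(-V / 80.0)
def a_m(V): return 0.1 * (25.0 - V) / (math.exp((25.0 - V) / 10.0) - 1.0)
def b_m(V): return 4.0 * math.exp(-V / 18.0)
def a_h(V): return 0.07 * math.exp(-V / 20.0)
def b_h(V): return 1.0 / (math.exp((30.0 - V) / 10.0) + 1.0)

def run(I_ext, T=50.0, dt=0.01):
    V = 0.0
    # start the gates at their resting steady states alpha / (alpha + beta)
    m = a_m(V) / (a_m(V) + b_m(V))
    h = a_h(V) / (a_h(V) + b_h(V))
    n = a_n(V) / (a_n(V) + b_n(V))
    trace = []
    for _ in range(int(T / dt)):
        I_ion = (gNa_bar * m**3 * h * (V - E_Na)
                 + gK_bar * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C_m
        m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
        trace.append(V - 65.0)   # shift back to membrane voltage (rest at -65)
    return trace

trace = run(I_ext=10.0)          # steady drive -> repetitive spiking
print(round(max(trace), 1))      # peaks near +40 mV

Setting I_ext to zero and starting V a little above threshold instead reproduces the single spike described above.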

Finally, we can inspect our gating variables m, h, and n, to see how they evolve over time. In the plot below, you see that the Na+ channel activation variable m (in blue) goes up really quickly – Na+ channels open fast; but it goes down just as quickly – they also close fast. The Na+ inactivation variable, h, quickly decreases during the spike – Na+ channels are blocked and cannot open again for some time. In the meantime, the K+ activation variable n goes up, lagging a bit behind m – K+ channels open more slowly and the membrane voltage goes back down.

Still alive?

I don’t know about you, but I’m tired. The good news is that now we have a functional HH model. More good news: we can do a lot of things with it. Unfortunately, that requires additional explanations, and I think we could all use a break. So I’ll see you for the next part. Until then, feel free to toy with the model parameters.

P.S.: If someone knows a better solution for displaying LaTeX equations in WordPress, do let me know. The current method is hurting my soul.

What did you think about this post? Let us know in the comments below. And if you’d like to support our work, feel free to share it with your friends, buy us a coffee here, or even both.


References
Goaillard, J.-M., & Marder, E. (2021). Ion Channel Degeneracy, Variability, and Covariation in Neuron and Circuit Resilience. Annual Review of Neuroscience, 44(1), 335–357. https://doi.org/10.1146/annurev-neuro-092920-121538

Hodgkin, A. L., Huxley, A. F., & Katz, B. (1952). Measurement of current‐voltage relations in the membrane of the giant axon of Loligo. The Journal of Physiology, 116(4), 424–448. https://doi.org/10.1113/jphysiol.1952.sp004716

Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117(4), 500–544. https://doi.org/10.1113/jphysiol.1952.sp004764

#computationalModeling #computationalNeuroscience #hodgkinHuxleyModel #math

[Image: a construction site for the brain – a partly built brain suspended on scaffolding, hooked by wires to a large computer and a monitor showing EEG traces, with a TMS coil hanging from scaffolding and a toolbox of ions and neurons in the foreground.]

[Image: a biological cell membrane and its equivalent electrical circuit – a lipid bilayer with Naâș, Kâș, and other ion channels, trapped negatively charged proteins, and charges accumulating on opposite sides of the membrane, modeled on the right as resistors, voltage sources for each ion type, and an ion pump.]

[Image: membrane voltage over time for an external current of 12 mA/cmÂČ – the trace starts at about -65 mV, peaks near +40 mV around 2.5 ms, dips to about -80 mV, then slowly recovers: the typical shape of an action potential.]

How do babies and blind people learn to localise sound without labelled data? We propose that innate mechanisms can provide coarse-grained error signals to bootstrap learning.

New preprint from @yang_chu.

arxiv.org/abs/2001.10605

Thread below 👇

#neuroscience #computationalneuroscience #compneuro #compneurosci

2025-04-24

What happens after Neuromatch Academy?
Meet our Impact Scholars đŸ€“

22 mini-seminars from early-career scientists across NeuroAI, DL, Climate, and more.

From data to real-world impact!

âžĄïž Watch here: youtube.com/playlist?list=PLkB

#ComputationalNeuroscience #DeepLearning #ClimateScience #NeuroAI

2025-04-04

I keep going back to this question about #TemporalCreditAssignment and #HippocampalReplay:
As an "agent" you want to learn the value of places and which places are likely to lead to reward;

1) if a place leads to higher than expected reward, you'll want to propagate the reward information back through the places that led to it. If replay does that, you should see an increase of replay at a new reward site, and the replay sequences should start at the reward and reflect what you just did to reach it. Right?

2) if a place leads to lower than expected reward, you'll also want to propagate that lowered value, pretty much in the same way, so if replay does that you should see a similar replay rate and content for increased OR decreased reward sites. Right?

3) if a place has had unchanged reward for a while and you're just in exploitation mode (just going there again and again because you know that's the best place to go to in the environment), then you shouldn't need to update anything, and replay rate should be quite low at that unchanged reward site. Right?

That's not at all what replay is doing IRL, so does that mean replay is not used for temporal credit assignment? Or did I (very likely) miss something?

#Neuroscience #ComputationalNeuroscience #DecisionMaking #Hippocampus

Preview of the talk I'm giving on Friday. #neuroscience #CompNeuro #ComputationalNeuroscience

Gru meme. Captions are "Used fancy new learning algorithm", "Learns the task well, output looks right" and then "Found a solution we could have found 100 years ago"
2025-03-17

Don't miss your chance to be part of Neuromatch Academy Courses—a one-of-a-kind learning experience where you'll dive into computational neuroscience, machine learning, and data science with expert mentors and a global community! 🌎

🔔 Last date to apply: March 23rd
💡 You automatically lose the chances you don’t take—so take yours NOW!

📌 Sign up today: neuromatch.io/courses/

#Neuromatch #ComputationalNeuroscience #MachineLearning #NeuroAI #NeuroscienceEducation

Time is ticking! Last date to apply is 23rd March. You automatically lose the chances you don't take! Sign up for Neuromatch Academy Courses
2025-03-12

When I transitioned from cognitive to computational neuroscience, I found myself in a bit of a bind. I had learned calculus, but I had progressed little beyond pattern recognition: I knew which rules to apply to find solutions to which equations, but the equations themselves lacked any sort of real meaning for me.

So I struggled with understanding how formulas could be implemented in code and why the code I was reading could be described by those formulas. Resources explaining math “for neuroscientists” were unfortunately quite useless for me, because they usually presented the necessary equations for describing various neural systems, assuming the presence of that basic understanding/intuition I lacked.

Of course, I figured things out eventually (otherwise I wouldn’t be writing about it), but I’m 85% sure I’m not the only one who’s ever struggled with this, and so I wrote the tutorial I wish I could’ve had. If you’re in a similar position, I hope you’ll find it useful. And if not, maybe it helps you get a glimpse into the struggles of the non-math people in your life. Either way, it has cats.

neurofrontiers.blog/building-a

#neuroscience #math #ComputationalNeuroscience #tutorial #academia

Ankur Sinha "FranciscoD" sanjay_ankur@fosstodon.org
2025-03-05

#NeuroML is participating in #GSoC2025 again this year under @INCF . We're looking for people with some experience of #ComputationalNeuroscience to work on developing #standardised biophysically detailed computational models using #NeuroML #PyNN and #OpenSourceBrain.

Please spread the word, especially to students interested in modelling. We will help them learn the NeuroML ecosystem so they can use its standardised pipeline in their work.

docs.neuroml.org/NeuroMLOrg/Ou

CC #AcademicChatter

I'm giving an online talk starting in 15m (as part of UCL's NeuroAI series).

It's on neural architectures and our current line of research trying to figure out what they might be good for (including some philosophy: what might an answer to this question even look like?).

Sign up (free) at this link to get the zoom link:

eventbrite.co.uk/e/ucl-neuroai

#neuroscience #compneuro #computationalneuroscience #neuroai

2025-02-04

With the current situation in the #US, several of my former colleagues there are looking for a #PostDocJob in #Europe, to do #BehaviouralNeuroscience or #ComputationalNeuroscience in #SpatialCognition (or adjacent).
Lots of hashtags, I know


Do you know a #EU or #UK #Neuroscience lab looking to hire a postdoc in these fields? Let me know and I'll pass it on to them!

Edit: adding #RodentResearch and #humanresearch for the species concerned (in this case)

Come along to my (free, online) UCL NeuroAI talk next week on neural architectures. What are they good for? All will finally be revealed and you'll never have to think about that question again afterwards. Yep. Definitely that.

đŸ—“ïž Wed 12 Feb 2025
⏰ 2-3pm GMT
â„č Details and registration: eventbrite.co.uk/e/ucl-neuroai

#neuroscience #CompNeuro #ComputationalNeuroscience #NeuroAI

2025-02-04

🚀 Neuromatch Academy 2025 is coming! 🚀

📱 Key Dates:
📍 Feb 24 – Applications Open!
📍 Mar 23 – Deadline (Midnight in your timezone)
📍 Mid-April – Decisions Announced
📍 Early May – Enrollment Deadline

Join a global community & dive into Computational Neuroscience, Deep Learning, Comp Tools for Climate Science, or NeuroAI! Don’t miss out—apply & tag a friend! 🌍✹

#NMA2025 #AIForAll #ComputationalNeuroscience #DeepLearning

Road map to important application dates. February 24, Applications Open. March 23, Applications Close. Mid-April, Application Decisions. Early-May, Enrollment Deadline.
Ankur Sinha "FranciscoD" sanjay_ankur@fosstodon.org
2025-01-28

We are very happy to provide a consolidated update on the #NeuroML ecosystem in our @eLife paper, “The NeuroML ecosystem for standardized multi-scale modeling in neuroscience”: doi.org/10.7554/eLife.95135.3

#NeuroML is a standard and software ecosystem for data-driven biophysically detailed #ComputationalModelling endorsed by the @INCF and COMBINE, and includes a large community of users and software developers.

#Neuroscience #ComputationalNeuroscience #ComputationalModelling 1/x

An image with the NeuroML logo in the middle and the different steps of the model building life cycle around it: create, validate, visualize, simulate, fit, share, re-use.

What's the right way to think about modularity in the brain? This devilish 😈 question is a big part of my research now, and it started with this paper with @GabrielBena, finally published after the first preprint in 2021!

nature.com/articles/s41467-024

We know the brain is physically structured into distinct areas ("modules"?). We also know that some of these have specialised function. But is there a necessary connection between these two statements? What is the relationship - if any - between 'structural' and 'functional' modularity?

TLDR if you don't want to read the rest: there is no necessary relationship between the two, although when resources are tight, functional modularity is more likely to arise when there's structural modularity. We also found that functional modularity can change over time! Longer version follows.

#Neuroscience #CompNeuro #ComputationalNeuroscience

New preprint! With Swathi Anil and @marcusghosh.

If you want to get the most out of a multisensory signal, you should take its temporal structure into account. But which neural architectures do this best? đŸ§”đŸ‘‡

biorxiv.org/content/10.1101/20

#neuroscience #computationalneuroscience #compneuro

Are you interested in a masters or PhD in computational neuroscience? Don't miss @bccn_berlin's hybrid event!

🧠 International graduate program(s) in computational neuroscience
📅 29-Jan-2025, 15-18 CET
📍 BCCN Lecture hall & Zoom

Learn more & register: bit.ly/4h0zj9y

#neuroscience #neuroinformatics #ComputationalNeuroscience
