#ActivationFunction

Victoria Stuart 🇨🇦 🏳️‍⚧️ @persagen
2023-07-12

...
Addendum

Forward-Forward Algorithm
medium.com/@Mosbeh_Barhoumi/fo

The forward-forward algorithm uses a custom loss function that compares the mean squared value of the activations ("goodness") for positive and negative samples.
The network optimizes this loss locally: each dense layer computes gradients and takes optimization steps on its own trainable weights, rather than backpropagating errors through the whole network.
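As a rough illustration (not the article's code), here is a minimal sketch of that layer-local loss in PyTorch, assuming a single dense layer, a goodness threshold `theta`, and softplus as the comparison function; all names and values are illustrative:

```python
# Minimal forward-forward sketch: train one dense layer so that the mean
# squared activation ("goodness") is high for positive samples and low
# for negative samples. Threshold and shapes are assumed for illustration.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

layer = torch.nn.Linear(784, 512)                    # one trainable dense layer
optimizer = torch.optim.Adam(layer.parameters(), lr=1e-3)
theta = 2.0                                          # assumed goodness threshold

def goodness(x):
    """Mean squared activation of the layer's output."""
    return layer(x).relu().pow(2).mean(dim=1)

x_pos = torch.randn(32, 784)                         # stand-in positive samples
x_neg = torch.randn(32, 784)                         # stand-in negative samples

# Push goodness above theta for positives and below theta for negatives.
loss = (F.softplus(theta - goodness(x_pos)) +
        F.softplus(goodness(x_neg) - theta)).mean()

optimizer.zero_grad()
loss.backward()                                      # gradients stay local to this layer
optimizer.step()
```

In a deeper network, each layer would repeat this step on its own (detached) inputs, which is what lets the method avoid end-to-end backpropagation.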
...

Nafnlaus 🇮🇸 🇺🇦 @nafnlaus@fosstodon.org
2022-12-08

5. "Why would one avoid using a linear #ActivationFunction in a #NeuralNetwork?" #AI

No, #ChatGPT3 #GPT3. The derivative of a linear activation function is *always* positive; it has no vanishing gradient. Its real problems are that you can't usefully backpropagate (the derivative is constant) and that a network of linear layers can be mathematically reduced to a single layer.
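A quick numerical check of that last point (shapes chosen arbitrarily for illustration): with no nonlinearity between them, two stacked weight matrices behave exactly like one.

```python
# Two stacked linear layers with no nonlinearity collapse to a single
# linear layer with weight matrix W2 @ W1.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))
x = rng.standard_normal(3)

two_layer = W2 @ (W1 @ x)        # "deep" linear network
one_layer = (W2 @ W1) @ x        # equivalent single layer
print(np.allclose(two_layer, one_layer))  # True
```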
