#FeedForward

2025-04-14

#Zoomposium with Dr. #Patrick #Krauß: Building instructions for #artificial #consciousness

Transferring the various stages of #Damasio's #theory of consciousness 1:1 into concrete #schematics for #deeplearning. To this end, strategies such as #feedforward connections, #recurrent #connections in the form of #reinforcement learning and #unsupervised #learning are used to simulate the #biological #processes of the #neuronal #networks.

More at: philosophies.de/index.php/2023

or: youtu.be/rXamzyoggCo

photo of Patrick Krauss
2024-11-27

Collage, March 2019: "Gentlemen, it is the microbes who will have the last word." #collage #collageart #bibliomancy #art #feedforward

This is a collage artwork incorporating various visual elements, such as a black-and-white photograph of a woman under an intricately patterned umbrella, bold colors, and layered textures. The composition includes text fragments such as "Gentlemen, it is the microbes who will have the last word," adding a narrative or provocative statement to the imagery. There's a mix of human figures, abstract elements, and a bird, creating a surreal, layered aesthetic.
RoundSparrow 🐦 (RoundSparrow)
2024-06-26
Ada Czerwonogora (adita)
2024-04-22

I wrote this very brief summary of the Hattie & Timperley (2007) article for my Assessment module in the Master's in Higher Education Didactics (UCLAEH):

The power of feedback and formative feedback

yodemoliendohoteles.wordpress.

Jeroen Habets (jeroen@habets.dev)
2024-02-24

Detailed #explanation of #AI #LLM using my favourite #database #PostgreSQL by Alex Bolenok (quassnoi)

Nicely describes how all the constituent pieces of an LLM come together:
#Tokenizer, #Embeddings, #Attention/#Masking, #Feedforward, #temperature, #Inference

explainextended.com/2023/12/31
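The pieces the post lists can be sketched in a few lines of NumPy. This is an illustrative toy, not Alex Bolenok's implementation: a position-wise feed-forward sublayer plus temperature-scaled sampling over output logits.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feed_forward(x, W1, b1, W2, b2):
    # Position-wise FFN: expand, apply a ReLU non-linearity, project back.
    # Each token (row of x) is transformed independently.
    return np.maximum(0, x @ W1 + b1) @ W2 + b2

def sample_next_token(logits, temperature=1.0, rng=None):
    # Temperature < 1 sharpens the distribution, > 1 flattens it.
    if rng is None:
        rng = np.random.default_rng(0)
    probs = softmax(logits / temperature)
    return int(rng.choice(len(probs), p=probs))
```

Attention and the tokenizer are left out here; the linked article walks through all six components in order.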

Victoria Stuart 🇨🇦 🏳️‍⚧️ (persagen)
2023-09-11

Addendum 10

One Wide Feedforward is All You Need
arxiv.org/abs/2309.01826

* 2 non-embedding components in the transformer architecture: attention; feed-forward network (FFN)
* attention captures interdependencies between words regardless of position
* the FFN non-linearly transforms each input token independently
* the FFN (a significant fraction of the parameters) is highly redundant
* modest drop in accuracy when removing the FFN from decoder layers & sharing a single FFN across the encoder
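The sharing trick in the bullets above can be sketched as follows; the dimensions, initialization, and residual wiring here are invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
d_model, d_ff, n_layers = 16, 64, 6

# One set of FFN weights, reused by every encoder layer
# instead of n_layers independent copies.
W1 = rng.normal(size=(d_model, d_ff)) * 0.1
W2 = rng.normal(size=(d_ff, d_model)) * 0.1

def shared_ffn(x):
    return np.maximum(0, x @ W1) @ W2

x = rng.normal(size=(5, d_model))  # a sequence of 5 token vectors
for _ in range(n_layers):
    # Attention sublayers omitted for brevity; every layer
    # applies the same shared FFN with a residual connection.
    x = x + shared_ffn(x)

# FFN parameters shrink from n_layers copies to one:
shared_params = W1.size + W2.size          # 2 * d_model * d_ff
unshared_params = n_layers * shared_params
```

Since the FFN holds a large fraction of a transformer's parameters, collapsing the per-layer copies into one shared module is where the reported savings come from.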

gtbarry (gtbarry)
2023-08-18

A jargon-free explanation of how AI large language models work

Word vectors - Humans represent words with letters. Language models use a long list of numbers

Each layer of an LLM is a transformer - Each layer takes a sequence of inputs—each word—and adds information

Feed-forward layers predict the next word

arstechnica.com/science/2023/0
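The "word vectors" idea can be made concrete with a toy example; these three-dimensional vectors and word choices are invented for illustration (real models learn hundreds or thousands of dimensions):

```python
import numpy as np

# Toy word vectors: each word is "a long list of numbers".
vectors = {
    "cat":   np.array([0.9, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.2, 0.0]),
    "piano": np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, 0.0 for unrelated ones.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words with similar meanings get similar vectors,
# so "cat" sits closer to "dog" than to "piano".
cat_dog = cosine(vectors["cat"], vectors["dog"])
cat_piano = cosine(vectors["cat"], vectors["piano"])
```

It is this geometry that the transformer layers refine, until a final feed-forward projection scores every word in the vocabulary as the possible next token.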

Victoria Stuart 🇨🇦 🏳️‍⚧️ (persagen)
2023-07-11

Absorbing Phase Transitions in Artificial Deep Neural Networks
arxiv.org/abs/2307.02284

To summarize, we believe that this work places the order-to-chaos transition in initialized artificial deep neural networks in the broader context of absorbing phase transitions, & serves as the first step toward the systematic comparison between natural/biological & artificial neural networks.
...

Phil McAleer (mcaleerp)
2023-03-04

Colleagues are surprised when I say I enjoy “marking”, and I always think that is because their focus is wrong; I see it as “giving feedback and feedforward” rather than just marking or grading. This blog has many excellent tips on providing effective feedback and feedforward, despite the clunky name. I think seeing the process for what it really is could help lecturers get more out of it.

facultyfocus.com/articles/educ

The vOICe vision BCI 🧠🇪🇺 (seeingwithsound@mas.to)
2023-02-02

Distinct early and late neural mechanisms regulate feature-specific sensory adaptation in the human visual system (pnas.org/doi/10.1073/pnas.2216): fatigue and sharpening occur "at different points in the sensory processing cascade, likely reflecting distinct #feedforward and #feedback interactions". #neuroscience

2023-01-28

#FediLZ #Feedback #FeedForward #matheedu

I have once again revised my self-assessment rubric. Thanks to @noelte030 for the inspiration from his books.

What do you think? Any suggestions for improvement?

2023-01-25

#FediLZ #Feedback #FeedForward

Inspired by a talk by @noelte030, I wanted to revise my self-assessment rubric with regard to the #4K. Somehow I am hitting my limits here. Do you have any ideas?

2022-11-18

The #birdsite is currently a real-time lab for #SystemsSafety and #resilience. The irony is that the threat to the system is a #cybernetic one. The application of a non-fitting #TheoryOfManagement created a #FeedForward self-amplifying #RaceToTheBottom.

It's a safe bet that what happens these days will be subject to #management research for years to come, maybe even a classic.

washingtonpost.com/technology/
