#DeepNeuralNetworks

IndiScale (indiscale)
2025-07-11

Do you have a recommendation for a provider (suitable for training)? We are looking for options with:
- a low CO2 footprint

Netherlands eScience Center (eScienceCenter@akademienl.social)
2024-12-09

🚀 We've released a new version of DIANNA, our open-source #ExplainableAI (#XAI) tool designed to help researchers get insights into predictions of #DeepNeuralNetworks.

What's new:
👉 improved dashboard
👉 extensive documentation
👉 added tutorials

MORE: esciencecenter.nl/news/new-rel
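As a rough illustration of the kind of attribution map tools like DIANNA produce, here is a minimal occlusion-based saliency sketch in plain NumPy. The `predict` function, patch size, and demo image are assumptions for illustration, not DIANNA's API.

```python
# Minimal occlusion-based saliency sketch (illustrative; not DIANNA's API).
# Assumes `predict` maps an HxWxC float image to a scalar class score.
import numpy as np

def occlusion_saliency(predict, image, patch=8, baseline=0.0):
    h, w = image.shape[:2]
    base_score = predict(image)
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            # Importance = how much the score drops when this patch is hidden.
            saliency[y:y + patch, x:x + patch] = base_score - predict(occluded)
    return saliency

# Demo with a trivial "model" that scores mean brightness of the centre.
img = np.random.rand(32, 32, 3)
demo_predict = lambda im: im[12:20, 12:20].mean()
print(occlusion_saliency(demo_predict, img).round(3))
```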

2024-09-19

Does anyone know the URL for the "observatory" website (I think that's what they called it) where one of the AI/DNN labs analysed various machine-vision models and built a map of all of their nodes?

You could click on each node and see the images (and sometimes text) that triggered it, and also images generated by exciting that node while clamping others (like DeepDream).

I can't remember who it was and can't find it.

#AI #DeepNeuralNetworks #NeuralNets #YOLO #deepdream

Annual Computer Security Applications Conference (ACSAC_Conf@infosec.exchange)
2024-08-29

Last in the session was Park et al.'s "Adversarial Perturbation-Based Fingerprinting to Identify Proprietary Dataset Use in #DeepNeuralNetworks", identifying stolen datasets even with different model architectures. (acsac.org/2023/program/final/s) 4/4
#DNN #AI
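For context, here is a conceptual sketch of the general idea only, not Park et al.'s actual algorithm: adversarial perturbations crafted against one model trained on the proprietary dataset tend to transfer to other models trained on the same data, so an unusually high transfer rate flags dataset reuse. All names and numbers below are illustrative.

```python
# Conceptual sketch of perturbation-based dataset fingerprinting
# (illustrative only; not Park et al.'s actual algorithm).
import numpy as np

def transfer_rate(fingerprints, suspect_predict):
    """fingerprints: (perturbed_input, original_label) pairs crafted on a
    model trained with the proprietary dataset."""
    flips = sum(suspect_predict(x) != y for x, y in fingerprints)
    return flips / len(fingerprints)

# A suspect model that misclassifies these fingerprints far more often than
# an independently trained baseline suggests it used the dataset.
fingerprints = [(np.array([0.1, 0.9]), 1), (np.array([0.8, 0.2]), 0)]
suspect_predict = lambda x: int(x.argmax())
print(transfer_rate(fingerprints, suspect_predict))  # 0.0 for this toy suspect
```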

Fabrizio Musacchio (pixeltracker@sigmoid.social)
2023-12-08

With the success of #DeepNeuralNetworks in building #AI systems, one might wonder if #Bayesian models are no longer significant. New paper by Thomas Griffiths and colleagues argues the opposite: these approaches complement each other, creating new opportunities to use #Bayes to understand intelligent machines 🤖

📔 "Bayes in the age of intelligent machines", Griffiths et al. (2023)
🌍 arxiv.org/abs/2311.10206

#DNN #NeuronalNetworks

Figure 2: Marr's levels of analysis provide a framework for understanding information processing systems such as the human brain or AI systems. Different kinds of computational models engage with these different levels – Bayesian models are typically defined at the computational level, while artificial neural networks explore hypotheses at the algorithmic and implementation levels.
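As a toy illustration of the computational-level stance the figure describes: Bayes' rule scores hypotheses directly, independent of whatever network might implement the computation. All numbers here are made up.

```python
# Toy Bayesian inference at Marr's computational level (made-up numbers).
import numpy as np

priors = np.array([0.7, 0.3])        # P(H) for two hypotheses
likelihoods = np.array([0.2, 0.9])   # P(data | H)
posterior = priors * likelihoods
posterior /= posterior.sum()         # Bayes' rule: P(H | data)
print(posterior)                     # -> approximately [0.341, 0.659]
```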
Death Star Robot (deathstarrobot)
2023-04-24

I co-developed several new artificial neural network architectures with ChatGPT's help today. Muahahahaha! Yes, novel concepts turned into actual, actionable programming code. I realized that I'm going to have the first Self Aware Neural Network up and running before the end of 2023.

Nick Byrd, Ph.D. (ByrdNick@nerdculture.de)
2023-03-26

Why #DeepNeuralNetworks need #Logic:

Nick Shea (#UCL/#Oxford) suggests:

(1) Generating novel stuff (e.g., #Dalle's art, #GPT's writing) is cool, but slow and inconsistent.

(2) Just a handful of logical inferences can be used *across* loads of situations (e.g., #modusPonens works the same way every time).

So (3) by #learning Logic, #DNNs would be able to recycle a few logical moves on a MASSIVE number of problems (rather than generate a novel solution from scratch for each one).

#CompSci #AI

Outline of Shea's talk. Two types of "representational transition" (e.g., a logical inference like modus ponens):

1. Content-specific

2. Non-content-specific

What non-content-specific transitions are useful for:
A. Stuff way outside one's trained experience (e.g., learning)
B. Inferences from already stored data/memories (e.g., quickly generating novel conclusions from what you already know, or identifying inconsistencies between one's beliefs to achieve reflective equilibrium).
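Shea's point (3) is easy to make concrete: one modus ponens routine, written once, can be reused over any number of domain-specific rules. A minimal sketch with made-up facts and rules:

```python
# One inference rule, reused across arbitrarily many domains (illustrative).
def modus_ponens(facts, rules):
    """rules: iterable of (antecedent, consequent) pairs; facts: a set."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in derived and q not in derived:
                derived.add(q)   # from p and p -> q, infer q
                changed = True
    return derived

rules = [("rain", "wet ground"), ("wet ground", "slippery"),
         ("GPT output", "text"), ("text", "tokenizable")]
print(modus_ponens({"rain", "GPT output"}, rules))
```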
Icelandic Vision Lab (icevislab@neuromatch.social)
2023-03-09

Wow. In 24 hours, we have gone from zero to 4.4K followers; that's crazy. Thank you for the warm welcome and excellent tips. I gave up on replying to all of you after someone pointed out that I was spamming thousands of people – sorry! Also, please do not read too much into it if we do not respond or take a long time responding; we are a busy bunch and may simply sometimes miss your post or messages. Mastodon allows long posts and I am taking advantage of that, so here are a few things that you may – or may not – want to know.

—Who are we?—

Research in the Icelandic Vision Lab (visionlab.is) focuses on all things visual, with a major emphasis on higher-level or “cognitive” aspects of visual perception. It is co-run by five Principal Investigators: Árni Gunnar Ásgeirsson, Sabrina Hansmann-Roth, Árni Kristjánsson, Inga María Ólafsdóttir, and Heida Maria Sigurdardottir. Here on Mastodon, you will most likely be interacting with me – Heida – but other PIs and potentially other lab members (visionlab.is/people) may occasionally also post here as this is a joint account. If our posts are stupid and/or annoying, I will however almost surely be responsible!

—What do we do?—

Current and/or past research at IVL has looked at several visual processes, including #VisualAttention, #EyeMovements, #ObjectPerception, #FacePerception, #VisualMemory, #VisualStatistics, and the role of #Experience / #Learning effects in #VisualPerception. Some of our work concerns the basic properties of the workings of the typical adult #VisualSystem. We have also studied the perceptual capabilities of several unique populations, including children, synesthetes, professional athletes, people with anxiety disorders, blind people, and dyslexic readers. We focus on #BehavioralMethods but also make use of other techniques including #Electrophysiology, #EyeTracking, and #DeepNeuralNetworks.

—Why are we here?—

We are mostly here to interact with other researchers in our field, including graduate students, postdoctoral researchers, and principal investigators. This means that our activity on Mastodon may sometimes be quite niche: boosting posts from others on research papers, conferences, or work opportunities in specialized fields, or partaking in discussions on debates in our field, data analysis, or the scientific review process. Science communication and outreach are hugely important, but this account is not about that as such. So we take no offence if that means you unfollow us; that is perfectly alright :)

—But will there still sometimes be stupid memes as promised?—

Yes. They may or may not be funny, but they will be stupid.

#VisionScience #CognitivePsychology #CognitiveScience #CognitiveNeuroscience #StupidMemes

Tero Keski-Valkama (tero@rukii.net)
2023-03-06

Through scaling #DeepNeuralNetworks we have found in two different domains, #ReinforcementLearning and #LanguageModels, that these models learn to learn (#MetaLearning).

They spontaneously learn internal models with memory and learning capability which are able to exhibit #InContextLearning much faster and much more effectively than any of our standard #backpropagation based deep neural networks can.

These rather alien #LearningModels embedded inside the deep learning models are emulated by #neuron layers, but aren't necessarily deep learning models themselves.

I believe it is possible to extract these internal models which have learned to learn, out of the scaled up #DeepLearning #substrate they run on, and run them natively and directly on #hardware.

This allows those much more efficient learning models to be used either as #LearningAgents themselves, or as a further substrate for further meta-learning.

I have ongoing #embodiment #research with a related goal, focused specifically on extracting (or distilling) the models out of the meta-models, here:
github.com/keskival/embodied-e

It is of course an open research problem how to do this, but I have a lot of ideas!

If you're inspired by this, or if you think the same, let's chat!
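For readers unfamiliar with the extraction step: distillation is the usual recipe for pulling a behaviour out of a larger model into a smaller native one. Here is a minimal teacher-student sketch in PyTorch, with placeholder linear models rather than the project's actual code:

```python
# Generic teacher-student distillation loop (illustrative sketch only;
# not the embodied-emulation project's actual code).
import torch
from torch import nn
import torch.nn.functional as F

teacher = nn.Linear(16, 4)   # placeholder "meta-model"
student = nn.Linear(16, 4)   # smaller native model in the real setting
opt = torch.optim.SGD(student.parameters(), lr=0.1)

def distill_step(x, T=2.0):
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    # Match the student's softened output distribution to the teacher's.
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

for _ in range(100):
    distill_step(torch.randn(32, 16))
```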

MARVIN MC CUTCHAN (marvin@mastodon.wien)
2023-01-01

Working with #datascience, #machinelearning, and #geospatial #remotesensing data. Also doing research on #GeoAI and #deepneuralnetworks designs. I understand #gis and #geoinformatics. You can check me out on Google Scholar and LinkedIn.
Happy to make new friends here #gischat . Also, love #techno.

Tero Keski-Valkama (tero@rukii.net)
2022-12-22

@rachelwilliams, yes, the #DeepNeuralNetworks exhibit true #intuition and #creativity. However, the large amount of #compute required is because we are using traditional #computers which are #synchronous, #dense and #sequential to emulate these #NeuralNetworkArchitectures which are #asynchronous, #sparse and massively #parallel.
With proper #cores they should take much less power than the human #brain, which is 12 W.
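The dense-vs-sparse point can be made concrete in a few lines: a dense matrix multiply pays for every zero weight, while a sparse representation touches only the active connections. A rough NumPy/SciPy sketch of the operation-count gap (an illustration, not a power measurement):

```python
# Dense vs. sparse activation cost, in operation-count terms (rough sketch).
import numpy as np
from scipy import sparse

n = 2_000
dense_w = np.random.rand(n, n)
dense_w[np.random.rand(n, n) > 0.01] = 0.0   # keep ~1% of "synapses" active
sparse_w = sparse.csr_matrix(dense_w)
x = np.random.rand(n)

y_dense = dense_w @ x    # touches all n*n entries
y_sparse = sparse_w @ x  # touches only the ~n*n/100 nonzeros
assert np.allclose(y_dense, y_sparse)
```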

Ralph058 (S/he/it) AF4EZ (Ralph058@techhub.social)
2022-12-12

@cloy I want to follow this thread. I'd be interested, too. I'm interested in #ADAS #ComputerVision #DataScience #DeepNeuralNetworks #JetsonNano #RaspberryPi #OAK-D #OpenCV and of course #Python #C++ (I don't know if it will take that last tag) #C

Paul Marrow 🇪🇺 (evopma@ecoevo.social)
2022-12-07

I got interested in #BiologicallyInspiredComputing when I learned about #ArtificialLife #ALife. At that time the computing resources available were limited compared to today. Now we have #DeepNeuralNetworks #DNN but it is widely agreed (including by me) that they do not replace natural #cognition. #BiologicallyInspiredComputing can be used as an application technology, but how do we use #Computing to understand #Cognition?

Book on desk: "Artificial Life" edited by Christopher Langton. Proceedings of a workshop held at the Santa Fe Institute where the term "Artificial Life" was first coined.
gtbarry (gtbarry)
2022-11-21

Scientists Increasingly Can’t Explain How AI Works

Deep neural networks (DNNs), made up of layers upon layers of processing units trained on human-created data to mimic the neural networks of our brains, often seem to mirror not just human intelligence but also human inexplicability.

vice.com/en/article/y3pezm/sci
