Who invented deep residual learning?
https://people.idsia.ch/~juergen/who-invented-residual-neural-networks.html
#HackerNews #residuallearning #deepneuralnetworks #machinelearning #AIhistory #innovation
Do you have a recommendation for a #CloudComputing provider (with #GPU, suitable for training #deepNeuralNetworks)? We are looking for options with a maximum of:
- #GreenIT, low CO2 footprint, #sustainability
- #DataPrivacy
🚀 We've released a new version of DIANNA, our open-source #ExplainableAI (#XAI) tool designed to help researchers get insights into predictions of #DeepNeuralNetworks.
What's new:
👉improved dashboard
👉extensive documentation
👉added tutorials
MORE: https://www.esciencecenter.nl/news/new-release-of-escience-centers-explainable-ai-tool-dianna/
Does anyone know the URL for the "observatory" website (I think that's what they called it) where one of the AI/DNN labs analysed various machine vision models and built a map of all of the nodes?
You could click on each node and see the images (and sometimes text) that triggered it, and also images that were generated when they excited that node while clamping others (like DeepDream).
I can't remember who it was and can't find it.
Last in the session was Park et al.'s "Adversarial Perturbation-Based Fingerprinting to Identify Proprietary Dataset Use in #DeepNeuralNetworks", identifying stolen datasets even with different model architectures. (https://www.acsac.org/2023/program/final/s321.html) 4/4
#DNN #AI
With the success of #DeepNeuralNetworks in building #AI systems, one might wonder if #Bayesian models are no longer significant. New paper by Thomas Griffiths and colleagues argues the opposite: these approaches complement each other, creating new opportunities to use #Bayes to understand intelligent machines 🤖
📔 "Bayes in the age of intelligent machines", Griffiths et al. (2023)
🌍 https://arxiv.org/abs/2311.10206
We still do not understand consciousness.
I co-developed several new artificial neural network architectures with ChatGPT's help today. Muahahahaha! Yes, novel concepts turned into actual, actionable programming code. I realized that I'm going to have the first Self-Aware Neural Network up and running before the end of 2023. #neuralnetworks #ai #chatgpt #gpt4 #openAI #deepneuralnetworks #selfawarenetworks #selfawareness
Referenced link: https://techxplore.com/news/2023-03-architecture-combines-deep-neural-networks.html
Discuss on https://discu.eu/q/https://techxplore.com/news/2023-03-architecture-combines-deep-neural-networks.html
Originally posted by Phys.org / @physorg_com: http://nitter.platypush.tech/TechXplore_com/status/1641437632524697602#m
RT by @physorg_com: An #architecture that combines #deepneuralnetworks and vector-symbolic models @NatMachIntell https://www.nature.com/articles/s42256-023-00630-8 https://techxplore.com/news/2023-03-architecture-combines-deep-neural-networks.html
Why #DeepNeuralNetworks need #Logic:
Nick Shea (#UCL/#Oxford) suggests
(1) Generating novel stuff (e.g., #Dalle's art, #GPT's writing) is cool, but slow and inconsistent.
(2) Just a handful of logical inferences can be used *across* loads of situations (e.g., #modusPonens works the same way every time).
So (3) by #learning Logic, #DNNs would be able to recycle a few logical moves on a MASSIVE number of problems (rather than generate a novel solution from scratch for each one).
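The reuse argument in (3) can be made concrete with a tiny Python sketch (my own illustration, not from Shea's talk): a single modus ponens engine, applied unchanged to completely unrelated domains. All facts and rule names here are hypothetical examples.

```python
# One inference rule, modus ponens, reused across many problems.
def modus_ponens(facts, rules):
    """Apply 'if p then q' rules to known facts until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in rules:  # rule: p implies q
            if p in facts and q not in facts:
                facts.add(q)  # p, p -> q, therefore q
                changed = True
    return facts

# The same engine works on totally unrelated domains:
weather = modus_ponens({"raining"}, [("raining", "wet streets"),
                                     ("wet streets", "slippery roads")])
zoology = modus_ponens({"is_penguin"}, [("is_penguin", "is_bird"),
                                        ("is_bird", "has_feathers")])
print(weather)  # includes "slippery roads"
print(zoology)  # includes "has_feathers"
```

The point is that the engine never changes; only the facts and rules do, which is exactly the kind of reuse a generative model that solves each problem from scratch does not get for free.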
Wow. In 24 hours, we have gone from zero to 4.4K followers, that‘s crazy. Thank you for a warm welcome and excellent tips. I gave up on replying to all of you after someone pointed out that I was spamming thousands of people – sorry! Also, please do not read too much into it if we do not respond or take a long time responding, we are a busy bunch and may simply sometimes miss your post or messages. Mastodon allows long posts so I am taking advantage of that, so here are a few things that you may – or may not – want to know.
—Who are we?—
Research in the Icelandic Vision Lab (https://visionlab.is) focuses on all things visual, with a major emphasis on higher-level or “cognitive” aspects of visual perception. It is co-run by five Principal Investigators: Árni Gunnar Ásgeirsson, Sabrina Hansmann-Roth, Árni Kristjánsson, Inga María Ólafsdóttir, and Heida Maria Sigurdardottir. Here on Mastodon, you will most likely be interacting with me – Heida – but other PIs and potentially other lab members (https://visionlab.is/people) may occasionally also post here as this is a joint account. If our posts are stupid and/or annoying, I will however almost surely be responsible!
—What do we do?—
Current and/or past research at IVL has looked at several visual processes, including #VisualAttention, #EyeMovements, #ObjectPerception, #FacePerception, #VisualMemory, #VisualStatistics, and the role of #Experience / #Learning effects in #VisualPerception. Some of our work concerns the basic properties of the workings of the typical adult #VisualSystem. We have also studied the perceptual capabilities of several unique populations, including children, synesthetes, professional athletes, people with anxiety disorders, blind people, and dyslexic readers. We focus on #BehavioralMethods but also make use of other techniques including #Electrophysiology, #EyeTracking, and #DeepNeuralNetworks.
—Why are we here?—
We are mostly here to interact with other researchers in our field, including graduate students, postdoctoral researchers, and principal investigators. This means that our activity on Mastodon may sometimes be quite niche. This can include boosting posts from others on research papers, conferences, or work opportunities in specialized fields, partaking in discussions and debates in our field, data analysis, or the scientific review process. Science communication and outreach are hugely important, but this account is not about that as such. So we take no offence if that means you will unfollow us; that is perfectly alright :)
—But will there still sometimes be stupid memes as promised?—
Yes. They may or may not be funny, but they will be stupid.
#VisionScience #CognitivePsychology #CognitiveScience #CognitiveNeuroscience #StupidMemes
Through scaling #DeepNeuralNetworks we have found in two different domains, #ReinforcementLearning and #LanguageModels, that these models learn to learn (#MetaLearning).
They spontaneously learn internal models with memory and learning capability that exhibit #InContextLearning much faster and more effectively than any of our standard #backpropagation-based deep neural networks can.
These rather alien #LearningModels embedded inside the deep learning models are emulated by #neuron layers, but aren't necessarily deep learning models themselves.
I believe it is possible to extract these internal models which have learned to learn, out of the scaled up #DeepLearning #substrate they run on, and run them natively and directly on #hardware.
This allows those much more efficient learning models to be used either as #LearningAgents themselves, or as a further substrate for further meta-learning.
I have ongoing #embodiment #research with a related goal, focused specifically on extracting (or distilling) the models out of the meta-models, here:
https://github.com/keskival/embodied-emulated-personas
It is of course an open research problem how to do this, but I have a lot of ideas!
If you're inspired by this, or if you think the same, let's chat!
https://github.com/f/awesome-chatgpt-prompts/blob/main/README.md#prompts
I found this awesome repository of ChatGPT prompt ideas! This tool is more powerful than I'd ever imagined.
#ai #aiforgood #aiethics #airesearch #aitools #artificialintelligence #chatbots #deeplearning #deepneuralnetworks #digitaltransformation #machinelearning #ml #mlops #neuralnetworks #nlp #predictiveanalytics #web
Working with #datascience, #machinelearning and #geospatial #remotesensing data. Also doing research on #GeoAI and #deepneuralnetworks designs. I understand #gis #geoinformatics. You can check me out on Google Scholar and LinkedIn.
Happy to make new friends here #gischat . Also, love #techno.
@rachelwilliams, yes, the #DeepNeuralNetworks exhibit true #intuition and #creativity. However, the large amount of #compute required is because we are using traditional #computers which are #synchronous, #dense and #sequential to emulate these #NeuralNetworkArchitectures which are #asynchronous, #sparse and massively #parallel.
With proper #cores they should take much less power than the human #brain, which consumes about 12 W.
@cloy I want to follow this thread. I'd be interested, too. I'm interested in #ADAS #ComputerVision #DataScience #DeepNeuralNetworks #JetsonNano #RaspberryPi #OAK-D #OpenCV and of course #Python #C++ (I don't know if it will take that last tag) #C
I got interested in #BiologicallyInspiredComputing when I learned about #ArtificialLife #ALife. At that time the computing resources available were limited compared to today. Now we have #DeepNeuralNetworks #DNN but it is widely agreed (including by me) that they do not replace natural #cognition. #BiologicallyInspiredComputing can be used as an application technology, but how do we use #Computing to understand #Cognition?
Scientists Increasingly Can’t Explain How AI Works
Deep neural networks (DNNs)—made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains—often seem to mirror not just human intelligence but also human inexplicability.
#DeepNeuralNetworks #artificialintelligence #ai #machinelearning #data #bigdata #technology #tech #innovation
https://www.vice.com/en/article/y3pezm/scientists-increasingly-cant-explain-how-ai-works
Referenced link: https://hackernoon.com/spoken-language-understanding-slu-vs-natural-language-understanding-nlu
Discuss on https://discu.eu/q/https://hackernoon.com/spoken-language-understanding-slu-vs-natural-language-understanding-nlu
Originally posted by HackerNoon | Learn Any Technology / @hackernoon@twitter.com: https://twitter.com/hackernoon/status/1582630163283873794#m
Differences between SLU (Spoken Language Understanding) and NLU (Natural Language Understanding). Top FOSS and paid engines and their approach to SLU. - https://hackernoon.com/spoken-language-understanding-slu-vs-natural-language-understanding-nlu #deeplearning #deepneuralnetworks