I feel like if I were an engineer who worked for years on visual AI models like #Sora, #DallE, #DeepDream, etc, I would by now have developed brain damage or some sort of dissociative disorder from viewing so much derangement.
Just seeing some of this stuff is painful, but seeing it day in day out seems like it could slowly erode your actual perception & sense of reality.
I know certain things get in your head. Like the first time I used VR, my dreams were influenced by that type of space.
Deep Dream in the White Cube (2016).
One of my GenAI experiments, begun in 2015 with #DeepDream. Who can help me identify the artwork in the initial image (the second one), which I took in the Museum for Modern Art Frankfurt before 2016? Is it Erwin Wurm, Robert Gober, Elmgreen & Dragset, or Slominski?
#LLMs are an amazing #AI experiment with incredibly important research results, but the technology is far too immature to be allowed into production. Nobody should trust its output. We need more research and less development, we need to train LLMs under controlled conditions where we dissect every step in the training process and look at how it changes the patterns inside the artificial neural network.
Ever since #DeepDream and its #puppyslugs, the first AI-generated images to go viral on the Internet, researchers have been tinkering with existing artificial neural networks to see what a single layer of neurons, or even a single neuron, does within the system, and what kinds of patterns occur inside such a network while it works. But that's not enough. Instead of making the models bigger and bigger, we need to train much smaller models much faster and much more often, comparing individual outcomes to one another. Since we are dealing with complex systems very much capable of chaotic behaviour, we must not look at them the way we look at conventional engineering. The AI machinery may be entirely deterministic as far as the mathematics go, using pseudorandom numbers as the noise it needs in order to work, but even if we can reproduce the output by reusing the same random seed, we still cannot understand how it came to be. All we are doing at the moment is basically linear algebra, but with gazillion-dimensional tensors. Even if we trace the path each single bit of input signal takes through the model, we still don't understand how the model makes the output from the input, and that's because we aren't paying enough attention to how exactly the training works.
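The determinism-versus-understanding point can be made concrete with a toy experiment (everything here is illustrative: the tiny network, the `train_tiny_net` helper, and the task are made up for the demo). Training the same small net twice with the same seed reproduces it bit for bit, while a different seed yields a different set of internal weights for the same task; this is exactly the kind of small, fast, repeatable run the post argues we should be comparing:

```python
import numpy as np

def train_tiny_net(seed, steps=500, lr=0.05):
    """Train a one-hidden-layer net on y = 2x with plain gradient descent."""
    rng = np.random.default_rng(seed)
    X = np.linspace(-1, 1, 32)
    y = 2 * X
    W1 = rng.standard_normal(4) * 0.5      # input -> 4 hidden units
    W2 = rng.standard_normal(4) * 0.5      # hidden -> output
    for _ in range(steps):
        a = np.tanh(np.outer(X, W1))       # hidden activations, shape (32, 4)
        err = a @ W2 - y                   # prediction error
        gW2 = a.T @ err / len(X)           # backprop, done by hand
        gW1 = ((err[:, None] * W2) * (1 - a ** 2)).T @ X / len(X)
        W1 -= lr * gW1
        W2 -= lr * gW2
    loss = float(np.mean((np.tanh(np.outer(X, W1)) @ W2 - y) ** 2))
    return W1, W2, loss

# Same seed: the run is fully deterministic, weights match exactly.
W1a, W2a, _ = train_tiny_net(seed=0)
W1b, W2b, _ = train_tiny_net(seed=0)
# Different seed: same task, different internal weights.
W1c, W2c, _ = train_tiny_net(seed=1)
```

Reproducing a run bit for bit tells us nothing about *why* the weights ended up where they did; only comparing many such runs against one another does.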
We've got all those cute but unreliable toys now. They may not be fit for production in most cases, but as soon as the LLM cult collapses and the AI bubble bursts, people will find out that much smaller models trained on manually curated data can actually be very useful for all kinds of specialised systems, even though we won't get any closer to #AGI. I think with our current hardware technology we won't even get close to actual human intelligence before the decline of the Industrial Age erodes our global industrial productive capacity to the point where computers become very rare and very expensive again. Our current digital computers are far too energy-hungry and far too precise to run anything as complex and noise-resistant as a human brain; we'd have to build something analogue and low-power, something that doesn't compute with discrete numbers but with something like voltage or brightness that can take any value between 0 and 1. Up to now, we haven't really tried to make analogue signal-processing circuits really tiny because we have been using DSPs instead, but what if we made very densely packed silicon chips out of them that mimic the signal pathway topology of a slice of brain? I'm pretty sure there are already people working on that somewhere, but with a pitiful budget, because all the "AI" funds go to bloody useless LLM chatbots.
When the #AIBubble bursts, there won't be much funding for AI research, but at least more of it will go to fields where actual progress can be made instead of all being poured into #MachineLearning. Machine learning is great; we have made some real progress in the last 20 years because of all the Internet data on which we could train our models, and also because of relatively cheap GPUs to do the heavy lifting. But now GPUs are expensive because of chatbot breeders and cryptobros, and the data on the Internet is far more AI output than anything else. Since AI can't tell AI and humans apart yet (if ever), we are at the point where there won't be any progress in machine learning without a lot of human labour. Even if some mousepad proles from Africa or Asia or Latin America do all the click work, write all the detailed descriptions for visual media and audio, and pick out all the useless AI hallucinations that slipped into the proposed training data, it will make the process really, really expensive, because this is something that takes a lot of time and can't be automated. So completely new large-scale machine-learning models may soon be a thing of the past. Already existing models can still be used, and with LoRAs they can be taught some new tricks, but if we use whatever hype is left to learn as much as we can about the training process and what structures it builds inside the model, we will be able to build better models that can do more with less.
Pre #2020: #Factorizing Tools
These #AI were #DeepLearning breakthroughs. #Word2Vec, #DeepDream and #AlphaGo solved novel, previously unsolvable problems.
If you weren't in the field, you might not think these were AI, and #GPT 2 might have surprised you.
Deep Dream is turning 10 years old! This generative AI model by Alex Mordvintsev redefined everything: it revolutionized generative art and opened new paths for machine creativity. Read my essay on my experiences with Deep Dream since 2015. https://medium.com/merzazine/deep-dream-comes-true-eafb97df6cc5?sk=6d50ebb59584b487183385009ba50f54 #deepdream #aiart
Top 10 free neural networks for image generation: the best AI generators of 2025
Admit it: how many times have you wanted to quickly knock out a picture for a post or a presentation, only to get stuck in an editor or in endless Google searches for a suitable image? Wouldn't it be great if the picture in your head simply appeared? Time is money, inspiration is on pause, and this is where AI comes to the rescue. Neural networks can generate anything you like, including the craziest ideas. No more spending hours searching when, in a couple of clicks, you can see what was in your mind a second ago. By the way, did you notice the dinosaur on the cover? Let's call him Rex. Rex is himself the product of a neural network, and today he'll be the star of our experiments. But what will we do? Remember I mentioned crazy ideas? To get a feel for what image generation can do, let's give the AI a difficult task: send Rex somewhere into space, say to the Moon, put him in a spacesuit, and have him grill barbecue with the Earth in the background. Interested? Then buckle up, we are heading into the world of image generation. 1. Grok Now meet Grok, the neural network from xAI and my personal favourite on this list. Grok lives right inside the X interface (formerly known as Twitter) and uses Flux for image generation. It does a great job. Create a free account on X.com, hit the "Grok" button, and you're in! Want a nice piece of art? No problem! But we're here to experiment, right? We type: "Draw a dinosaur in a spacesuit grilling barbecue on the Moon with the Earth in the background." And here's the result: everyone is happy, both we and our dinosaur Rex!
https://habr.com/ru/companies/bothub/articles/881888/
#ai #ai_and_machine_learning #image_generation #canva #microsoft #deepai #adobe_express #deepdream
Does anyone know the URL for the "observatory" website (I think that's what they called it) where one of the AI/DNN labs analysed various machine-vision models and built a map of all of the nodes?
You could click on each node and see the images (and sometimes text) that triggered it, and also images generated by exciting that node while clamping others (like Deep Dream).
I can't remember who it was and can't find it.
@modean987 After SD 1.5 went public, all kinds of free image generator websites appeared, and I tried a lot of them, but my main platform is still #DeepDreamGenerator, where I have been active since well before latent diffusion was a thing, when we only had #DeepDream and #DeepStyle. Then I discovered #Yodayo, a platform that now hosts over a hundred SD models (mostly SD 1.5, a few SDXL), almost all of them trained exclusively on anime and manga, with a few on Western comics and animation.
#aiart
@modean987 I've been using all kinds of AI tools for images ever since #DeepDream happened in the 2010s, that thing with the puppyslugs. You input one image, select which neuron layer of the model you want to sample, and how many iterations you want, and you get a new image that somehow grew out of the old one. A little later, Neural Style Transfer aka #DeepStyle made it possible to mix two images, one for content and one for style. I made enormous volumes of images with that. #aiart
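That loop (one input image, one chosen layer, a number of iterations) is classic gradient ascent on the image. Here is a minimal sketch of the idea, assuming a stand-in "network" of two random linear layers rather than a real convnet; `deep_dream` and `layers` are names made up for this demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": two random linear layers standing in for conv layers.
W1 = rng.standard_normal((64, 64)) / 8.0
W2 = rng.standard_normal((64, 64)) / 8.0
layers = [W1, W2]

def deep_dream(image, layer_index, iterations, step=0.1):
    """Gradient-ascent loop in the spirit of DeepDream: nudge the input
    image so the chosen layer's activations get stronger and stronger."""
    x = image.astype(float).copy()
    for _ in range(iterations):
        # Forward pass up to the chosen layer.
        acts = [x]
        for W in layers[:layer_index + 1]:
            acts.append(W @ acts[-1])
        a = acts[-1]
        # Objective: 0.5 * ||a||^2, so dL/da = a; backprop to the input.
        grad = a
        for W in reversed(layers[:layer_index + 1]):
            grad = W.T @ grad
        # Normalize the gradient and take an ascent step on the image.
        x += step * grad / (np.abs(grad).mean() + 1e-8)
    return x

img = rng.standard_normal(64)          # stand-in for a flattened input image
dream = deep_dream(img, layer_index=1, iterations=20)
```

Each step pushes the image in the direction that amplifies the chosen layer's activations; in the real model, that amplification is where the hallucinated puppyslug patterns grow out of the original picture.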
https://deepdreamgenerator.com/ddream/xccrwbf6vbh Random Robot AI Art #ai #aiart #deepdream
Flying man in the dream...
Prompt:
Draw an ultra-realistic picture with holographic elements of a man flying in a beautiful place with high cliffs by the sea in his dream
#AIArtCommunity #AIArtwork
#aiart #art #ai #digitalart #generativeart #artificialintelligence #nft #aiartists #neuralart #contemporaryart #deepdream #artist #nftart #aiartist #abstract #digitalartist #midjourney #midjourneyart #dalle #openai #beautiful #sea #modernart
#DeepDream
Waiting for Utopia