#ImageNet

2025-04-07

Lead developer of ChatGPT and his new project, Safe Superintelligence

Many people know Ilya Sutskever only as an outstanding scientist and programmer who was born in the USSR, co-founded OpenAI, and was among those who ousted CEO Sam Altman from the company in 2023. When Altman was brought back, Sutskever resigned of his own accord and moved on to a new startup, Safe Superintelligence. Sutskever did indeed found OpenAI together with Musk, Brockman, Altman, and other like-minded people, and he was the company's chief technical genius. As OpenAI's chief scientist, he played a key role in the development of ChatGPT and other products. Ilya is only 38, remarkably young for a star of world renown.

habr.com/ru/companies/ruvds/ar

#Илья_Суцкевер #Ilya_Sutskever #OpenAI #10x_engineer #AlexNet #Safe_Superintelligence #ImageNet #неокогнитрон #GPU #GPGPU #CUDA #компьютерное_зрение #LeNet #Nvidia_GTX_580 #DNNResearch #Google_Brain #Алекс_Крижевски #Джеффри_Хинтон #Seq2seq #TensorFlow #AlphaGo #Томаш_Миколов #Word2vec #fewshot_learning #машина_Больцмана #сверхинтеллект #GPT #ChatGPT #ruvds_статьи

2025-01-05

#ConvolutionalNeuralNetworks (#CNNs in short) are immensely useful for many #imageProcessing tasks and much more...

Yet you sometimes encounter bits of code with little explanation. Have you ever wondered about the origins of the values for image normalization in #imagenet?

  • Mean: [0.485, 0.456, 0.406] (for R, G and B channels respectively)
  • Std: [0.229, 0.224, 0.225]

Strangest to me is the apparent need for three-digit precision. Here, after tracking down the origin of these numbers for MNIST and ImageNet, I test whether that precision is really important: guess what, it is not (so much)!

👉 if interested in more details, check out laurentperrinet.github.io/scib
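The constants above are simply the per-channel mean and standard deviation of the training images; preprocessing subtracts the mean and divides by the std for each channel. A minimal sketch in plain Python (the function name is illustrative; in practice torchvision's `transforms.Normalize` does this over whole tensors):

```python
# Standard ImageNet per-channel normalization constants (R, G, B).
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb):
    """Normalize one RGB pixel whose channel values are floats in [0, 1]."""
    return tuple((c - m) / s
                 for c, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD))
```

A mean-valued pixel maps to (0, 0, 0), and a pure-white pixel to roughly (2.25, 2.43, 2.64).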

[Figure: Accuracy of ResNet on ImageNet for different image normalization values; standard values from https://pytorch.org/hub/pytorch_vision_resnet/]
Benjamin Carr, Ph.D. 👨🏻‍💻🧬 @BenjaminHCCarr@hachyderm.io
2024-11-16

How a stubborn #computerscientist accidentally launched the #deeplearning boom
"You’ve taken this idea way too far," a mentor told Prof. Fei-Fei Li, who was creating a new image #dataset that would be far larger than any that had come before: 14 million images, each labeled with one of nearly 22,000 categories. Then in 2012, a team from the University of Toronto trained a #neural network on #ImageNet, achieving unprecedented performance in image recognition, dubbed #AlexNet.
arstechnica.com/ai/2024/11/how #AI

2024-11-14

#AI heroic stories and underpaid labour:

"The project was saved when Li learned about Amazon Mechanical Turk, a crowdsourcing platform Amazon had launched a couple of years earlier."

#ImageNet... How a stubborn computer scientist accidentally launched the deep learning boom - Ars Technica
arstechnica.com/ai/2024/11/how

2024-11-04

🚀 New #AI Research: Simplified Continuous-time Consistency Models (#sCM)

🔬 Key findings:
• #OpenAI's new approach matches leading #diffusion models' quality using only 2 sampling steps
• 1.5B parameter model generates samples in 0.11 seconds on single #GPU
• Achieves ~50x wall-clock speedup compared to traditional methods
• Uses less than 10% of typical sampling compute while maintaining quality

🎯 Technical highlights:
• Simplifies theoretical formulation of continuous-time consistency models
• Successfully scaled to 1.5B parameters on #ImageNet at 512×512 resolution
• Demonstrates consistent performance scaling with teacher diffusion models
• Enables real-time generation potential for images, audio, and video

📄 Learn more: openai.com/index/simplifying-s
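The "2 sampling steps" above refer to the usual consistency-model recipe: one model evaluation from pure noise, then a single re-noise-and-denoise refinement. A hedged sketch (not OpenAI's actual code; the function name and noise levels below are illustrative assumptions):

```python
import random

def sample_two_step(consistency_fn, n, sigma_max=80.0, sigma_mid=0.8):
    """Two-step consistency sampling: consistency_fn(x, sigma) maps a noisy
    sample at noise level sigma directly to a clean estimate, so sampling
    needs only two model evaluations instead of hundreds of diffusion steps."""
    # Step 1: start from pure noise at the maximum noise level and denoise.
    x = [random.gauss(0.0, sigma_max) for _ in range(n)]
    x = consistency_fn(x, sigma_max)
    # Step 2: re-inject a little noise, then apply the model once more.
    x = [xi + random.gauss(0.0, sigma_mid) for xi in x]
    return consistency_fn(x, sigma_mid)
```

With a real trained model, dropping step 2 gives one-step sampling at some cost in quality; that trade-off is what the ~50x speedup figures compare against full diffusion sampling.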

Stuart D Neilson @StuartDNeilson
2023-11-06

"[AI] is “promising” nothing. It is [people] who are promising – or not promising. AI is a piece of software. It is made by people, deployed by people and by people... in terms of urgency, I’m more concerned about ameliorating the risks that are here and now [than by the risks of the techbro SkyNet singularity]."

— Fei-Fei Li, creator of ImageNet, whose memoir "The Worlds I See" is out now.

theguardian.com/technology/202

2023-09-27

@lowd I remember when most ML applications were variations on #MNIST. And #Imagenet, but I only had enough compute at the time to play around with MNIST. But yeah, even then "Recommendation Engines" were starting to be the first thing anyone mentioned, because they were low-hanging fruit: something of immediately obvious commercial value, with terrific training data and an easy task for deployment.

2023-09-27

Re-reading 'On the genealogy of machine learning datasets: A critical history of ImageNet' by @alexhanna. So clear the LLM debacle goes back to the start of the DL boom; its data fetish, flat universalism, social illiteracy & contempt for workers. journals.sagepub.com/doi/full/
#AI #datasets #Imagenet #resistingAI

Harald Klinke @HxxxKxxx@det.social
2023-09-25

Exploring ImageNet's influence in digital humanities and art curation:
The Curator's Machine project delves into how this dataset shapes relationships in the art world. By analyzing its absence of 'art' classification, lack of historical context, and texture vs. outline focus, this research bridges art history, coding, and digital humanities. 🎨🖥️
by @databasecultures

#DigitalHumanities #DigitalArtHistory #ImageNet
dahj.org/article/why-so-many-w

Published papers at TMLR @tmlrpub@sigmoid.social
2023-09-05

A DNN Optimizer that Improves over AdaBelief by Suppression of the Adaptive Stepsize Range

Guoqiang Zhang, Kenta Niwa, W. Bastiaan Kleijn

Action editor: Rémi Flamary.

openreview.net/forum?id=VI2JjI

#optimizers #imagenet #optimizer

Published papers at TMLR @tmlrpub@sigmoid.social
2023-09-04

Efficient Inference With Model Cascades

Luzian Lebovitz, Lukas Cavigelli, Michele Magno, Lorenz K Muller

Action editor: Yarin Gal.

openreview.net/forum?id=obB415

#imagenet #benchmark #models

New Submissions to TMLR @tmlrsub@sigmoid.social
2023-08-20

A simple, efficient and scalable contrastive masked autoencoder for learning visual representations

openreview.net/forum?id=pjdxPt

#autoencoders #autoencoder #imagenet

Published papers at TMLR @tmlrpub@sigmoid.social
2023-08-16

Learned Thresholds Token Merging and Pruning for Vision Transformers

Maxim Bonnaerens, Joni Dambre

Action editor: Mathieu Salzmann.

openreview.net/forum?id=WYKTCK

#imagenet #pruning #masking

Published papers at TMLR @tmlrpub@sigmoid.social
2023-08-12

Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks

Shiyu Liu, Rohan Ghosh, John Chong Min Tan, Mehul Motani

Action editor: Mingsheng Long.

openreview.net/forum?id=nGW2Ho

#pruning #imagenet #subnetworks

Published papers at TMLR @tmlrpub@sigmoid.social
2023-08-11

Foiling Explanations in Deep Neural Networks

Snir Vitrack Tamam, Raz Lapid, Moshe Sipper

Action editor: Jakub Tomczak.

openreview.net/forum?id=wvLQMH

#adversarial #imagenet #inception

New Submissions to TMLR @tmlrsub@sigmoid.social
2023-08-04

Synthetic Data from Diffusion Models Improves ImageNet Classification

openreview.net/forum?id=DlRsox

#imagenet #inception #generative

2023-07-31

Can we please, as people who work with large public #datasets, start using torrents? I am simply trying to find an old version of the #ImageNet Object Detection from Video dataset, and all of the links have been broken for multiple years! Another ImageNet dataset I'm fetching is downloading at 500 KB/s.

People have clearly been looking for and using these datasets, and now I need to retrain something and I’m without them. We need to band together and start a torrent tracker for datasets so that we don’t need to rely on one website to download from. With proper permission from the dataset owners of course…

I’m so committed I might buy my own domain and start hosting a torrent tracker. Anyone interested?

#DataScience #DataEngineer #MachineLearning #BigData #torrents #torrenting #archiving #archivist

Published papers at TMLR @tmlrpub@sigmoid.social
2023-07-24

Contrastive Attraction and Contrastive Repulsion for Representation Learning

Huangjie Zheng, Xu Chen, Jiangchao Yao et al.

Action editor: Yanwei Fu.

openreview.net/forum?id=f39UID

#softmax #representations #imagenet
