#Autoencoder

2025-04-23

'The Effect of SGD Batch Size on Autoencoder Learning: Sparsity, Sharpness, and Feature Learning', by Nikhil Ghosh, Spencer Frei, Wooseok Ha, Bin Yu.

jmlr.org/papers/v26/23-1022.ht

#sgd #autoencoder #sparse

2025-04-09

UEBA in cybersecurity: how autoencoder-based profiling of user behaviour helps detect threats and anomalies

In today's world, the number of attacks grows in proportion to the number of newly deployed technologies, especially when those technologies are still poorly understood. Attacks have recently become ever more varied, and the methods used to carry them out ever more sophisticated. Artificial-intelligence methods adopted by Red Team specialists pose an additional problem: in the hands of an experienced specialist, these tools become a real threat to potential targets. Meanwhile, most information-security tools rely on correlation or statistical methods, which in today's conditions often prove ineffective. What, then, is left for Blue Team specialists?

habr.com/ru/companies/gaz-is/a

#газинформсервис #информационная_безопасность #ueba #поведенческая_аналитика #lstm #autoencoder #falco
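The profiling idea in the article above (an autoencoder learns "normal" behaviour, and a high reconstruction error flags an anomaly) can be sketched in a few lines. Everything here is illustrative, not from the article: the feature dimensions, the tied-weight linear architecture, and the 99th-percentile threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for user-behaviour features: "normal" activity lies near a
# low-dimensional subspace (dimensions are illustrative assumptions).
d, k, n = 8, 2, 500
basis = rng.normal(size=(d, k))
normal = rng.normal(size=(n, k)) @ basis.T + 0.05 * rng.normal(size=(n, d))

# Tied-weight linear autoencoder x_hat = (x W) W^T, trained by plain
# gradient descent on the mean squared reconstruction error.
W = 0.1 * rng.normal(size=(d, k))
lr = 1e-3
for _ in range(500):
    E = normal @ W @ W.T - normal                       # reconstruction residual
    grad = 2.0 * (normal.T @ E @ W + E.T @ normal @ W) / n
    W -= lr * grad

def score(X, W):
    """Per-sample mean squared reconstruction error."""
    return ((X @ W @ W.T - X) ** 2).mean(axis=-1)

# Flag behaviour whose error exceeds the 99th percentile seen in training.
threshold = np.quantile(score(normal, W), 0.99)
outlier = 3.0 * rng.normal(size=(1, d))                 # off-subspace behaviour
is_anomaly = bool(score(outlier, W)[0] > threshold)
```

A real UEBA pipeline would replace this linear toy with something like the LSTM autoencoder the article's tags suggest, fed with sequences of user events; the detection rule stays the same: score by reconstruction error, alert above a threshold.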

2024-10-18

Autoencoders in simple terms

Autoencoders are a fundamental machine-learning and AI technique on top of which more complex models are built, for example in diffusion models such as Stable Diffusion. So what is an autoencoder?

habr.com/ru/companies/raft/art

#autoencoder #автоэнкодер #машинное_обучение
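The "compress, then reconstruct" structure the article refers to fits in a few lines. The sizes (784 inputs, a 32-unit bottleneck) and the random untrained weights are purely illustrative; the point is only the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes (assumptions, not from the article): a 784-pixel image
# squeezed through a 32-unit bottleneck and mapped back.
d_in, d_code = 784, 32
W_enc = rng.normal(scale=0.05, size=(d_in, d_code))
W_dec = rng.normal(scale=0.05, size=(d_code, d_in))

def encode(x):
    # Compression: project the input to a low-dimensional latent code.
    return np.tanh(x @ W_enc)

def decode(z):
    # Reconstruction: map the code back to input space.
    return z @ W_dec

x = rng.normal(size=(1, d_in))
z = encode(x)          # latent code, shape (1, 32)
x_hat = decode(z)      # reconstruction, shape (1, 784)
```

Training minimizes the reconstruction error between `x` and `x_hat`; because the bottleneck is narrower than the input, the network is forced to learn a compressed representation, which is exactly the piece that latent diffusion models such as Stable Diffusion reuse.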

2024-05-24

'Representation Learning via Manifold Flattening and Reconstruction', by Michael Psenka, Druv Pai, Vishal Raman, Shankar Sastry, Yi Ma.

jmlr.org/papers/v25/23-0615.ht

#flatnet #autoencoder #manifold

Amy Tabb 🇺🇦 amytabb@hachyderm.io
2024-02-08

Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth
Kevin Kögler, Alexander Shevchenko, Hamed Hassani, Marco Mondelli

abs: arxiv.org/abs/2402.05013
pdf: arxiv.org/pdf/2402.05013.pdf

#arXiv #ComputerVision #Autoencoder

Autoencoders are a prominent model in many empirical branches of machine
learning and lossy data compression. However, basic theoretical questions
remain unanswered even in a shallow two-layer setting. In particular, to what
degree does a shallow autoencoder capture the structure of the underlying data
distribution? For the prototypical case of the 1-bit compression of sparse
Gaussian data, we prove that gradient descent converges to a solution that
completely disregards the sparse structure of the input. Namely, the
performance of the algorithm is the same as if it was compressing a Gaussian
source - with no sparsity. For general data distributions, we give evidence of
a phase transition phenomenon in the shape of the gradient descent minimizer,
as a function of the data sparsity: below the critical sparsity level, the
minimizer is a rotation taken uniformly at random (just like in the compression
of non-sparse data); above the critical sparsity, the minimizer is the identity
(up to a permutation). Finally, by exploiting a connection with approximate
message passing algorithms, we show how to improve upon Gaussian performance
for the compression of sparse data: adding a denoising function to a shallow
architecture already reduces the loss provably, and a suitable multi-layer
decoder leads to a further improvement. We validate our findings on image
datasets, such as CIFAR-10 and MNIST.
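The abstract's first claim, that a rotation-shaped minimizer compresses sparse data no better than Gaussian data, is easy to check numerically. The sketch below is my own illustration, not the authors' code: it encodes sparse Gaussian vectors as sign(Wx) for a random rotation W, decodes with the optimal scalar multiple of the transpose, and recovers the Gaussian relative error 1 - 2/pi ≈ 0.363 despite the 10% sparsity.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n, p = 400, 2000, 0.1                                 # dimension, samples, sparsity
x = rng.normal(size=(n, d)) * (rng.random((n, d)) < p)   # sparse Gaussian data

# Random rotation encoder, 1-bit quantization, linear decoder alpha * W^T b.
# Since W is orthogonal, the loss can be measured in the rotated coordinates.
W, _ = np.linalg.qr(rng.normal(size=(d, d)))
y = x @ W.T                                  # rotated data
b = np.sign(y)                               # 1-bit code
alpha = (y * b).sum() / (b * b).sum()        # optimal scalar decoder gain
rel_loss = ((y - alpha * b) ** 2).sum() / (y ** 2).sum()

# The rotation mixes the sparse coordinates into near-Gaussian ones, so the
# relative error lands at the non-sparse Gaussian value 1 - 2/pi: the code
# "completely disregards the sparse structure", as the abstract puts it.
```

Improving on this baseline is exactly what the paper's denoising decoder and multi-layer architecture are for.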

New Submissions to TMLR tmlrsub@sigmoid.social
2023-08-20

A simple, efficient and scalable contrastive masked autoencoder for learning visual representations

openreview.net/forum?id=pjdxPt

#autoencoders #autoencoder #imagenet

2023-07-12

Extremely proud to present my group's pre-print paper executed by my PhD student Mr. Jonas Köhler on #earthquake #forecasting using #deeplearning. We pose earthquake forecasting as a classification problem and train a neural network to decide whether a #timeseries of length greater than 2 years will end in an earthquake of magnitude greater than 5 on the following day. The study can be summarised in the following three points: 1) We use spatio-temporal b-value data, on which we train an #autoencoder to learn normal seismic behaviour. 2) We then take the pixel-by-pixel reconstruction error as input for a convolutional dilated network classifier, whose output could serve for earthquake forecasting. 3) We develop a special progressive training method for this model to mimic real-life use. #data #research #ai4good #ai4science #seismology
arxiv.org/abs/2307.01812
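The forecasting-as-classification framing in the post reduces to a sliding-window labelling step. The helper below is a hypothetical sketch of that step only; the toy series and the 3-day window are mine, whereas the post uses windows longer than 2 years and a magnitude-5 threshold. In the real pipeline, each `errors[t]` would be the autoencoder reconstruction error from step 2.

```python
import numpy as np

def make_examples(errors, magnitudes, window):
    """Turn daily series into (history window, next-day label) pairs.

    errors     : per-day reconstruction-error summaries (step 2 of the post)
    magnitudes : per-day maximum observed magnitude
    window     : history length in days (the post uses > 2 years)
    label      : 1 if an M>5 event occurs on the following day, else 0
    """
    X, y = [], []
    for t in range(window, len(errors) - 1):
        X.append(errors[t - window:t])
        y.append(int(magnitudes[t + 1] > 5.0))
    return np.array(X), np.array(y)

# Toy data: 10 days, 3-day window (illustrative, not the paper's settings).
err = np.arange(10, dtype=float)
mag = np.array([2, 2, 2, 6, 2, 2, 2, 2, 6, 2], dtype=float)
X, y = make_examples(err, mag, window=3)   # 6 examples, one positive
```

The classifier (the convolutional dilated network in step 3) is then trained on `(X, y)`; the progressive training the post mentions would grow the available history over time to mimic real-life deployment.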

New Submissions to TMLR tmlrsub@sigmoid.social
2023-06-30

A Latent Diffusion Model for Protein Structure Generation

openreview.net/forum?id=8zzjem

#proteins #autoencoder #protein

New Submissions to TMLR tmlrsub@sigmoid.social
2023-05-18

CAE v2: Context Autoencoder with CLIP Latent Alignment

openreview.net/forum?id=f36LaK

#autoencoder #encoder #masked

Tiago F. R. Ribeiro tiago_ribeiro
2023-03-19

"Autoencoder Image Interpolation by Shaping the Latent Space"

🔗: arxiv.org/abs/2008.01487

Dan Stowell danstowell
2023-01-09

"Diffusion language models" blog post by @sedielem - lots of illuminating details, e.g. the connection between diffusion noise levels and scale (frequency) of features benanne.github.io/2023/01/09/d

2022-12-28

'Cauchy–Schwarz Regularized Autoencoder', by Linh Tran, Maja Pantic, Marc Peter Deisenroth.

jmlr.org/papers/v23/21-0681.ht

#autoencoders #autoencoder #generative

2022-12-21

#arxivfeed :

"GD-VAEs: Geometric Dynamic Variational Autoencoders for Learning Nonlinear Dynamics and Dimension Reductions"
arxiv.org/abs/2206.05183

#MachineLearning #DeepLearning #Variational #Autoencoder #DynamicalSystems
