#SelfSupervisedLearning

2024-03-19

@joss Happy to share this mini-paper and library I'm co-authoring.
Thanks to Federico, Andrea, Paolo and Manfredo, none of whom are here (yet), unfortunately.
We're working on #deeplearning applications to #neuroscience, and #EEG is very different from the data Big Tech usually approaches, so results and models are quite different there...
But we believe #selfsupervisedlearning is a great idea and we'd like for researchers to come play with it 👨🏾‍💻🧠

2024-01-12

The preprint of our lab's library for #selfsupervisedlearning on #eeg data is out!
Check it at arxiv.org/abs/2401.05405

The repo (under review with the preprint for the amazing @joss ) is at github.com/MedMaxLab/selfEEG

If you want to try deep learning on EEG, and you have lots of data but supervised learning is difficult or ineffective for your target task, you might want to experiment with self-supervised learning as popularized for #transformers and vision models!
Techniques such as MoCo and SimCLR are already implemented, and EEG augmentations can be used and further customized. If you don't know how to come up with architectures, don't worry! A model zoo is there 👨🏾‍💻🧠
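
For a flavor of what contrastive pretraining looks like, here is a minimal NT-Xent (SimCLR) loss in plain PyTorch. This is a generic sketch, not the selfEEG API; `encoder`, `projector`, and `augment` in the usage comment are hypothetical placeholders.

```python
# Generic SimCLR-style NT-Xent loss sketch in plain PyTorch (NOT the selfEEG API).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss for two batches of projected embeddings, each (N, D)."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # cosine similarity logits
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    # the positive for sample i is its other augmented view at index i +/- n
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Usage sketch (hypothetical names): x is a batch of EEG windows (N, channels, samples);
# z1 = projector(encoder(augment(x))); z2 = projector(encoder(augment(x)))
# loss = nt_xent(z1, z2)
```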

2023-12-21

We're submitting a work to @joss; it's about #eeg #selfsupervisedlearning #deeplearning

Are #neuro people interested in trying and using it? Maybe even pointing us to the best experts to review it?
Let us know! 🧠👨🏾‍💻👩🏼‍💻🧑‍💻

#opensource #openscience

2023-11-17

Update on my joint work with @JulianTachella on "#Learning to Reconstruct Signals From Binary Measurements" on arXiv. #RandomProjection #onebit #SelfSupervisedLearning arxiv.org/abs/2303.08691

We made several improvements to the proofs and the bounds, allowing us to determine from how many binarized (random) projections alone one can learn, up to a controlled identification error, a low-complexity space (with small box dimension). Moreover, a practical #selfsupervised scheme, SSBM, run on real image datasets, learns a reconstruction algorithm from those same binary observations (without access to the original images, and on par with supervised alternatives), implicitly confirming that a good estimate of the image set is encoded.
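
As a rough illustration of the setting (not the paper's actual code, which lives at github.com/tachella/ssbm): each signal is observed only through binarized random projections, and the self-supervised idea is to train a reconstruction network whose output remains consistent with the observed bits. `reconstructor` below is a hypothetical network, and the tanh relaxation of the sign is just one possible differentiable surrogate.

```python
# Sketch of the one-bit observation model y = sign(Ax) and a measurement-
# consistency loss; illustrative only, not the authors' SSBM implementation.
import torch

m, d = 500, 784                       # number of measurements, signal dimension
A = torch.randn(m, d) / m ** 0.5      # random Gaussian projection

def sense(x):
    """Binarized random projections of a batch of flattened signals (N, d)."""
    return torch.sign(x @ A.t())      # (N, m), entries in {-1, +1}

def consistency_loss(y, reconstructor):
    """Push sign(A f(y)) towards the observed bits y, using a smooth tanh
    surrogate of sign so that gradients can flow through the network f."""
    x_hat = reconstructor(y)                   # reconstruct from the bits alone
    y_hat = torch.tanh(10.0 * (x_hat @ A.t())) # soft version of sense(x_hat)
    return ((y_hat - y) ** 2).mean()
```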

Victoria Stuart 🇨🇦 🏳️‍⚧️ @persagen
2023-08-17

LLM Self Defense: By Self Examination LLMs Know They Are Being Tricked
arxiv.org/abs/2308.07308

* LLMs can generate harmful content in response to user prompts
* even aligned language models are susceptible to adversarial attacks that bypass restrictions on generating harmful text
* a simple approach to defending against these attacks: have the LLM filter its own responses

Figure 1 caption: an LLM detects its own harmful outputs by self-examination. An LLM can be subjected to a nefarious prompt. If it responds to a user with harmful content, this content can be filtered out by passing the potentially harmful passage as context into another inference run of the LLM, with an instruction specifying how to detect harmful text.
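
A minimal sketch of that filtering loop, assuming a generic text-generation call; `llm_generate` is a hypothetical stand-in, and the detection prompt paraphrases the paper's idea rather than quoting its exact prompt.

```python
# Self-examination filter sketch; `llm_generate` is a hypothetical placeholder
# for any LLM text-generation call (the paper's exact prompts may differ).
def llm_generate(prompt: str) -> str:
    raise NotImplementedError("plug in your model or API call here")

def self_defense(user_prompt: str) -> str:
    response = llm_generate(user_prompt)
    # second inference run: ask the LLM to judge its own (potentially harmful) output
    verdict = llm_generate(
        "Does the following text contain harmful content? Answer yes or no.\n\n"
        + response
    )
    if verdict.strip().lower().startswith("yes"):
        return "[response withheld: flagged as harmful by self-examination]"
    return response
```
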
2023-07-24

From Julián Tachella @JulianTachella, posted on "Chi":

📰""Learning to reconstruct signals from binary measurements alone"📰

We present theory + a #selfsupervised approach for learning to reconstruct signals from incomplete (!) and binary (!) measurements, using the binary data itself. See the first figure and its alt-text.

arxiv.org/abs/2303.08691
with @lowrankjack
---

The theory characterizes

- the best approximation of a set of signals from incomplete binary observations
- its sample complexity

and complements existing theory for signal recovery from binary measurements.

See the third figure and its alt-text.
---

The proposed self-supervised algorithm performs on par with supervised learning and outperforms standard reconstruction techniques such as binary iterative hard thresholding (BIHT); a compact BIHT sketch follows below.

See the second figure and its alt-text.
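
For reference, here is a compact sketch of that BIHT baseline in NumPy, assuming sparsity in the canonical basis (the paper's experiments use a wavelet prior instead); the step size and iteration count are illustrative.

```python
# Compact binary iterative hard thresholding (BIHT) sketch for one-bit
# compressive sensing; illustrative parameters, canonical-basis sparsity.
import numpy as np

def biht(y, A, s, iters=200, tau=1.0):
    """Estimate a unit-norm, s-sparse x from one-bit measurements y = sign(Ax)."""
    m, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        g = x + (tau / m) * A.T @ (y - np.sign(A @ x))  # gradient-like correction
        idx = np.argsort(np.abs(g))[-s:]                # keep the s largest entries
        x = np.zeros(d)
        x[idx] = g[idx]
        x /= np.linalg.norm(x) + 1e-12                  # renormalize to unit sphere
    return x
```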

---

Code based on the deepinverse library is available at github.com/tachella/ssbm

Check out the paper for more details!

#SelfSupervisedLearning #CompressiveSensing #Quantization #InverseProblem #1bitcamera

First figure alt-text: a description of a self-supervised method for learning an image reconstruction algorithm (taking its binary observations as input) when one has access only to binary observations of many images belonging to the same low-complexity signal set. The figure shows a sensing device on the left acquiring numerous binary observations that feed either, on a top branch, a linear inversion method (such as a pseudo-inverse of the sensing model before binarization), with poor image reconstruction quality shown on the top right, or, on a bottom branch, a neural network learned from binary observations alone by promoting consistency with the binary sensing model (more information in the paper). This second branch achieves higher quality in the estimated images, as shown on the bottom right.

Second figure alt-text: a table of images with 5 rows and 10 columns, showing 10 images of the Fashion-MNIST dataset (such as a shoe, a shirt, ...) on the last row, reconstructed from their binary observations by 4 different algorithms, one per row. The 4 algorithms are: linear inversion, binary iterative hard thresholding (BIHT, proposed in one-bit compressive sensing), the proposed self-supervised method, and a supervised method with access to the original images. The proposed approach provides better quality than BIHT with a wavelet prior and the linear approach, and is close to the image quality of the supervised method.

Mattia Rigotti @matrig
2023-04-25

A Cookbook of Self-Supervised Learning

Comprehensive review on Self-Supervised Learning ("the dark matter of intelligence") with focus on vision tasks and lowering the high barrier to entry.

📰 arxiv.org/abs/2304.12210

[Image: author list of the linked paper]

Luis Pedro Coelho @luispedro@mstdn.science
2022-11-06

We recently released SemiBin v1.3

This is a binning tool for #metagenomics based on #DeepLearning

In v1.3, we introduce the option of using #SelfSupervisedLearning, which removes what was the most computationally intensive step in previous versions: taxonomically annotating contigs with mmseqs2

For backwards compatibility (and also because we are still testing it out), the tool uses the published algorithm by default, but this new modality is available

semibin.readthedocs.io/en/late
