#ICLR24

Harald Klinke @HxxxKxxx@det.social
2024-02-07

If you would like to learn more about how it works: "Guiding Instruction-based Image Editing via Multimodal Large Language Models". Check out the code repository for the ICLR'24 Spotlight paper by Tsu-Jui Fu, Wenze Hu, Xianzhi Du, William Yang Wang, Yinfei Yang, and Zhe Gan.
github.com/apple/ml-mgie
#ICLR24 #ImageEditing #MLLMs #AIResearch

A diagram illustrating the process of a multimodal large language model (MLLM) editing an image of a cabin in the woods to place it in a desert setting.
2024-02-05

Happy to share our paper:

Genie🧞: Achieving Human Parity
in Content-Grounded Datasets Generation

was accepted to #ICLR24

From your content
Genie creates content-grounded data
of magical quality ✨
Rivaling human-based datasets!

arxiv.org/abs/2401.14367
#data #NLP #nlproc #ML #machinelearning #llm #RAG
