#Refiners

2024-02-04

Now back to #LLMs: if you don't want to train your own #ai foundation model, you can patch it with so-called #adapters. Benjamin Trim talked about their open-source adapter micro-framework: #Refiners works on top of #PyTorch and uses declarative layers to patch models and a context API to store state. #fosdem2024

FOSDEM session on Refiners, an adapter micro-framework for PyTorch
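The core idea behind adapters, if the pattern is new to you: freeze the pretrained weights and bolt a small trainable module onto an existing layer, so only the patch needs training. Here is a minimal LoRA-style sketch in plain PyTorch; the class name and parameters are mine for illustration, not the Refiners API (Refiners wraps this idea in its own declarative layer system):

```python
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    """Low-rank adapter patched onto a frozen linear layer.

    Illustrative sketch only; Refiners provides its own declarative
    adapter abstractions on top of PyTorch.
    """

    def __init__(self, target: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.target = target
        self.target.requires_grad_(False)  # keep pretrained weights frozen
        # Trainable low-rank patch: project down to `rank`, then back up.
        self.down = nn.Linear(target.in_features, rank, bias=False)
        self.up = nn.Linear(rank, target.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # patch starts as a no-op
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Original output plus a trainable low-rank correction.
        return self.target(x) + self.scale * self.up(self.down(x))

# Patch one layer of a "pretrained" model without retraining the rest.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
model[0] = LoRAAdapter(model[0], rank=4)
out = model(torch.randn(2, 16))
```

Only the adapter's `down`/`up` weights receive gradients, which is what makes patching cheap compared to retraining the foundation model.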
