BigCode is an open scientific collaboration working on the responsible training of large language models for coding applications.
In this organization you can find the artefacts of this collaboration:

- StarCoder, a state-of-the-art language model for code
- The Stack, the largest available pretraining dataset of permissively licensed code
- SantaCoder, a 1.1B parameter model for code
StarCoder is a 15.5B parameter language model for code, trained on 1T tokens across 80+ programming languages.
It uses Multi-Query Attention (MQA) for efficient generation, has an 8,192-token context window, and supports fill-in-the-middle generation.
Chat with StarCoder here: https://huggingface.co/chat/?model=bigcode/starcoder
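Below is a minimal sketch of how StarCoder can be loaded with the Hugging Face `transformers` library for standard completion and fill-in-the-middle prompting. It assumes you have accepted the model license on the Hub and are logged in; the example prompts are illustrative, not from the official documentation.

```python
# Sketch: load bigcode/starcoder and run completion + fill-in-the-middle.
# Requires `transformers` and `accelerate`; assumes access to the gated checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

# Plain left-to-right code completion.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(inputs.input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))

# Fill-in-the-middle: the model generates the span between prefix and suffix,
# using StarCoder's special FIM tokens.
fim_prompt = (
    "<fim_prefix>def print_hello():\n    <fim_suffix>\n    return None\n<fim_middle>"
)
inputs = tokenizer(fim_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(inputs.input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```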