Stable Diffusion start-up releases open source language models



Summary

British AI company Stability AI is known for its image AI Stable Diffusion. With StableLM, it is now launching a series of open-source language models.

StableLM-Alpha is available now in two sizes with 3 and 7 billion parameters. Larger models with 15 to 65 billion parameters are to follow.

The models are licensed under the Creative Commons CC BY-SA 4.0 license and can therefore be used commercially, as long as Stability AI is credited. They are, in any case, freely available for research purposes.

StableLM, like many other open-source language models, is based on EleutherAI’s “The Pile” dataset, but in an “experimental” version that Stability AI says is three times larger than before, with 1.5 trillion tokens. The startup wants to provide details about the dataset “in due course.”

Stability AI is currently in a legal battle with Getty Images over Stable Diffusion, for which it scraped images from the Getty Images database without explicit permission. That may be one reason for its reluctance to be transparent about the dataset; the current competitive environment may be another, as Stability AI is reportedly seeking new funding.

Few parameters, but good data

Despite the small number of parameters (3 and 7 billion), StableLM-Alpha shows “surprisingly good performance,” Stability AI writes. The quality of the language model results from the “richness of the dataset,” it adds, but it doesn’t publish any benchmarks.

The context window of StableLM-Alpha, i.e. how many word and sentence fragments (tokens) the language model can consider at once when generating an answer, is 4,096 tokens, on par with GPT-3-based ChatGPT.
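In practice, the context window caps how much text can be fed to the model at once. Below is a minimal sketch, assuming the "stabilityai/stablelm-base-alpha-7b" model ID on HuggingFace, of counting a prompt's tokens against that 4,096-token limit with the transformers library:

```python
# A minimal sketch of checking a prompt against StableLM-Alpha's 4,096-token
# context window. The model ID below is an assumption based on Stability AI's
# HuggingFace releases; adjust it if the repository name differs.
from transformers import AutoTokenizer

MAX_CONTEXT = 4096  # tokens the model can attend to at once

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-base-alpha-7b")

prompt = "Explain the difference between parameters and tokens in one paragraph."
n_tokens = len(tokenizer.encode(prompt))

print(f"{n_tokens} / {MAX_CONTEXT} tokens used")
# Anything beyond the window must be truncated or summarized before inference.
assert n_tokens <= MAX_CONTEXT, "prompt exceeds the model's context window"
```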

The StableLM-Alpha models with 3 and 7 billion parameters are now available on GitHub. Models with 15, 30, and 65 billion parameters are supposed to follow, along with technical documentation and training parameters. A GPT-3-sized model with 175 billion parameters is also planned.

As a complement to StableLM-Alpha, Stability AI releases instruction models fine-tuned according to the Alpaca formula, using a combination of five datasets: Alpaca, GPT4All, Dolly, ShareGPT, and HH.
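The "Alpaca formula" means fine-tuning a base model on instruction-response pairs wrapped in a fixed prompt template. As a rough illustration, here is the template the original Stanford Alpaca project uses; whether StableLM-Tuned-Alpha adopts this exact wording is an assumption:

```python
# The prompt template of the original Stanford Alpaca project (no-input variant).
# Whether StableLM-Tuned-Alpha uses this exact wording is an assumption; it
# illustrates the general instruction-tuning recipe, not the shipped format.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the Alpaca-style template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Summarize what StableLM-Alpha is in two sentences."))
```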

The tuned 7-billion-parameter model, StableLM-Tuned-Alpha, is available on HuggingFace.
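For orientation, a minimal sketch of loading the tuned model with the HuggingFace transformers library follows; the model ID and the fp16-on-GPU setup are assumptions, not instructions from Stability AI:

```python
# A minimal sketch of running StableLM-Tuned-Alpha with HuggingFace transformers.
# The model ID and fp16-on-GPU setup are assumptions; expect roughly a
# 16 GB VRAM footprint for the 7B model in half precision.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-tuned-alpha-7b"  # assumed HuggingFace repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model.to("cuda")

inputs = tokenizer("What is Stable Diffusion?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```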

Stability AI also announces a program to build an RLHF-based open-source dataset specifically for AI assistants, which it plans to develop with partners such as the OpenAssistant community. With such a dataset, it could fine-tune the StableLM-Alpha models in a way that makes them viable for commercial use. Currently, this isn't the case, because the training process for StableLM-Tuned-Alpha uses text generated by ChatGPT, and commercial use would violate OpenAI's terms and conditions.

While Stable Diffusion was and remains a milestone for image-generating open-source AI, Stability AI may face an uphill battle to achieve similar success with language models: there are now numerous open-source offerings, and their quality is improving rapidly. The recently released OpenAssistant, for example, sets new quality standards for dialog-oriented open-source language models.
