Giraffe is an open-source LLM with a context window of 32,000 tokens, making it useful for many applications in business contexts.
Large language models like GPT-4 show impressive capabilities, but their limited context windows restrict their use in tasks that require processing dozens of pages. Variants such as GPT-4-32k or Anthropic's Claude, with a context window of 100,000 tokens, offer a much larger "memory" and are therefore better suited to such use cases.
Now, researchers have used interpolation techniques to extend the context window of the open-source LLaMA model by up to 10 times, to about 32,000 tokens. The resulting LLM, called Giraffe, comes in a 13-billion-parameter version and has one of the largest context windows of any open-source LLM.
Open-source Giraffe provides insight into scaling context windows
Being open source, the research also provides important insights into the inner workings of LLMs and into different scaling techniques for enlarging the context window. According to the Abacus.AI team, linear scaling of position embeddings was the most effective at increasing context length, with other techniques also having some effect.
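The core idea behind linear scaling of position embeddings can be sketched in a few lines. This is a minimal illustration, not the team's actual code: it assumes a RoPE-style model and hypothetical sequence lengths (trained on 4,096 positions, run at 32,768). Longer positions are simply squeezed back into the range the model saw during training before the rotary angles are computed.

```python
import numpy as np

def rope_angles(positions, dim=8, base=10000.0):
    # Standard rotary-embedding angles: one frequency per pair of dims.
    freqs = base ** (-np.arange(0, dim, 2) / dim)
    return np.outer(positions, freqs)

def interpolated_positions(seq_len, trained_len):
    # Linear position interpolation: rescale the longer sequence's
    # position indices into the range the model was trained on.
    scale = trained_len / seq_len
    return np.arange(seq_len) * scale

# Hypothetical lengths for illustration only.
trained_len, extended_len = 4096, 32768
pos = interpolated_positions(extended_len, trained_len)

# Every interpolated position stays inside the trained range [0, 4096).
assert pos.max() < trained_len
angles = rope_angles(pos)
```

The trade-off is resolution: neighboring tokens end up an eighth of a position apart, which the model never saw in training, which is one reason fine-tuning on long sequences is typically still needed.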
They also found that accuracy on long-context tasks decreased with increasing length, demonstrating the limitations of current techniques. They also showed that perplexity, commonly used to measure LLM performance, is insufficient on its own to measure long-context performance, highlighting the need for custom tests.
More information and data are available in the project's GitHub repository, and the Giraffe-v2-13b-32k model is hosted on Hugging Face.