OpenAI announces GPT-4 Turbo, assistants, and new API features



Summary

At its developer conference, OpenAI announced GPT-4 Turbo, a cheaper, faster and smarter GPT-4 model. Developers get plenty of new API features at a much lower cost.

The new GPT-4 Turbo model is now available as a preview via the OpenAI API and directly in ChatGPT. According to OpenAI CEO Sam Altman, GPT-4 Turbo is “much faster” and “smarter”.
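For developers who want to try the preview, the model is reached through the standard Chat Completions endpoint. A minimal sketch using the official Python SDK; the model identifier "gpt-4-1106-preview" is the preview name announced for GPT-4 Turbo, but check the model list available in your account:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "gpt-4-1106-preview" is assumed to be the GPT-4 Turbo preview identifier
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Summarize the GPT-4 Turbo announcement in one sentence."}],
)
print(response.choices[0].message.content)
```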

The Turbo release also explains the recent rumors about an updated ChatGPT training cutoff: GPT-4 Turbo's knowledge extends to April 2023, while the original ChatGPT only had knowledge up to September 2021. Altman said that OpenAI plans to update the model more regularly in the future.

Probably the biggest highlight for developers is the significant price reduction that comes with GPT-4 Turbo: input tokens (text processing) for Turbo are three times cheaper, and output tokens (text generation) are two times cheaper.


The new Turbo model costs $0.01 per 1,000 input tokens (versus $0.03 for GPT-4) and $0.03 per 1,000 output tokens (versus $0.06 for GPT-4). It is also much cheaper than GPT-4 32K, even though it has a four times larger context window (see below).
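To put the new prices in perspective, here is a back-of-the-envelope comparison for a hypothetical request with 10,000 input tokens and 1,000 output tokens (the token counts are made up for illustration):

```python
# Hypothetical request: 10,000 input tokens, 1,000 output tokens
input_tokens, output_tokens = 10_000, 1_000

# Prices per 1,000 tokens, as announced
gpt4_cost = input_tokens / 1000 * 0.03 + output_tokens / 1000 * 0.06
turbo_cost = input_tokens / 1000 * 0.01 + output_tokens / 1000 * 0.03

print(f"GPT-4:       ${gpt4_cost:.2f}")   # $0.36
print(f"GPT-4 Turbo: ${turbo_cost:.2f}")  # $0.13
```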

Image: OpenAI

Another highlight for developers: OpenAI is extending the GPT-4 Turbo API to include image processing, DALL-E 3 integration, and text-to-speech. The “gpt-4-vision-preview” model can analyze images, while separate endpoints generate images with DALL-E 3 and create human-like speech from text.
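A rough sketch of how the three additions can be called from the official Python SDK; the identifiers ("gpt-4-vision-preview", "dall-e-3", "tts-1", the "alloy" voice) are the names OpenAI announced, and the image URL is a placeholder:

```python
from openai import OpenAI

client = OpenAI()

# 1. Image understanding with the vision-enabled GPT-4 Turbo preview
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(vision.choices[0].message.content)

# 2. Image generation with DALL-E 3
image = client.images.generate(model="dall-e-3", prompt="A watercolor fox in a forest", size="1024x1024")
print(image.data[0].url)

# 3. Text-to-speech
speech = client.audio.speech.create(model="tts-1", voice="alloy", input="Hello from the new API.")
speech.stream_to_file("hello.mp3")
```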


OpenAI is also working on an experimental GPT-4 fine-tuning program and a custom model program for organizations with large proprietary datasets. GPT-3.5 fine-tuning is being extended to the 16K model.
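For context, GPT-3.5 fine-tuning runs through the public fine-tuning endpoint. A minimal sketch of a job submission; the training file and the "gpt-3.5-turbo-1106" base model name are assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted training examples
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against a GPT-3.5 Turbo base model
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo-1106",  # assumed base model; GPT-4 fine-tuning remains an invite-only program
)
print(job.id, job.status)
```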

GPT-4 Turbo has a much larger context window

Probably the most important technical change is the larger context window, i.e. the amount of text, measured in tokens, that GPT-4 Turbo can process at once and take into account when generating output. Previously, the context window was capped at 32,000 tokens; GPT-4 Turbo supports 128,000 tokens.

This is the equivalent of up to 100,000 words or 300 pages in a standard book, according to Altman. He also said that the 128K GPT-4 Turbo model is “much more accurate” across its full context.
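One practical implication: a roughly book-length document now fits in a single request. A quick way to check whether a text fits, using the tiktoken tokenizer; the file name and the 4,096-token headroom reserved for the reply are assumptions for illustration:

```python
import tiktoken

CONTEXT_WINDOW = 128_000   # GPT-4 Turbo context size in tokens
REPLY_HEADROOM = 4_096     # tokens reserved for the model's answer (assumption)

encoding = tiktoken.encoding_for_model("gpt-4")

with open("book.txt", encoding="utf-8") as fh:
    text = fh.read()

n_tokens = len(encoding.encode(text))
print(f"{n_tokens} tokens")
if n_tokens <= CONTEXT_WINDOW - REPLY_HEADROOM:
    print("Fits in a single GPT-4 Turbo request.")
else:
    print("Too long; split or summarize first.")
```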
