OpenAI Slashes Prices and Boosts Rate Limits


In a strategic move to make AI more accessible and scalable, OpenAI has rolled out a sweeping price reduction across its suite of AI models, alongside a generous increase in rate limits for GPT-4 customers.

Significant Price Drops: A Boon for Developers

Leading the charge, GPT-4 Turbo's input tokens now cost $0.01 per 1K, a third of the previous $0.03 (a 67% reduction), while output tokens are halved to $0.03 from $0.06. The price efficiency extends to GPT-3.5 Turbo, whose input tokens drop to $0.001 (down 67% from the 16K model's $0.003) and output tokens to $0.002 (down 50% from $0.004). GPT-3.5 Turbo's 4K-model users also reap a benefit, with a 33% reduction on input tokens.

For those utilizing fine-tuning services, OpenAI introduces a cost-effective approach. Fine-tuned GPT-3.5 Turbo 4K input tokens have fallen 75% to $0.003 (from $0.012), and output tokens have dropped 62.5% to $0.006 (from $0.016). This pricing also covers the new fine-tuned gpt-3.5-turbo-0613 models, ensuring that enhanced AI capabilities are within more economical reach.
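Since all of these rates are quoted per 1K tokens, the cost of a single request is simply tokens ÷ 1000 × rate for each direction. A minimal sketch of that arithmetic, using the new prices quoted above (the dictionary keys and function name here are illustrative, not OpenAI's API):

```python
# New per-1K-token rates quoted in the article (keys are illustrative labels).
PRICES_PER_1K = {
    "gpt-4-turbo":      {"input": 0.01,  "output": 0.03},
    "gpt-3.5-turbo":    {"input": 0.001, "output": 0.002},
    "gpt-3.5-turbo-ft": {"input": 0.003, "output": 0.006},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at the new rates."""
    p = PRICES_PER_1K[model]
    return input_tokens / 1000 * p["input"] + output_tokens / 1000 * p["output"]

# A 2,000-token prompt with a 500-token completion on GPT-4 Turbo:
print(f"${estimate_cost('gpt-4-turbo', 2000, 500):.3f}")  # prints $0.035
```

The same function works for any of the models above; only the rate table changes.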

Older Models vs. New Models: A Comparative Breakdown

Older models vs. new models (prices per 1K tokens)

GPT-4 Turbo
  Older: GPT-4 8K: Input $0.03, Output $0.06; GPT-4 32K: Input $0.06, Output $0.12
  New:   GPT-4 Turbo 128K: Input $0.01, Output $0.03

GPT-3.5 Turbo
  Older: GPT-3.5 Turbo 4K: Input $0.0015, Output $0.002; GPT-3.5 Turbo 16K: Input $0.003, Output $0.004
  New:   GPT-3.5 Turbo 16K: Input $0.001, Output $0.002

GPT-3.5 Turbo fine-tuning
  Older: GPT-3.5 Turbo 4K fine-tuning: Training $0.008, Input $0.012, Output $0.016
  New:   GPT-3.5 Turbo 4K and 16K fine-tuning: Training $0.008, Input $0.003, Output $0.006

As part of the restructuring, OpenAI contrasts previous pricing with the new, more competitive rates. The original GPT-4 8K model, priced at $0.03 for input and $0.06 for output per 1K tokens, is now superseded by GPT-4 Turbo 128K, which not only enlarges the context window to 128K but also cuts costs to $0.01 for input and $0.03 for output. This scale of reduction is replicated across the board, including the GPT-3.5 Turbo models, which now default to a 16K context window at a fraction of the earlier price.

Doubling Down on Rate Limits

To bolster application scalability, OpenAI has doubled the tokens-per-minute limit for all paying GPT-4 customers. The upgrade is visible on the rate-limit page, and developers can now request usage-limit increases from their account settings. OpenAI's new usage tiers also make it transparent and predictable how limits scale automatically as developers' needs grow.

These initiatives by OpenAI signify a shift towards democratizing AI development. By slashing costs and expanding rate limits, OpenAI is not only fuelling innovation but also equipping the developer community to drive growth and efficiency in AI-powered solutions.

Feature Older Model Pricing New Model Pricing (per 1K tokens)
GPT-4 Turbo (Input) $0.03 $0.01
GPT-4 Turbo (Output) $0.06 $0.03
GPT-3.5 Turbo 4K (Input) $0.0015 $0.001
GPT-3.5 Turbo 4K (Output) $0.002 $0.002
GPT-3.5 Turbo 16K (Input) $0.003 $0.001
GPT-3.5 Turbo 16K (Output) $0.004 $0.002
GPT-3.5 Turbo Fine-tuning 4K (Input) $0.012 $0.003
GPT-3.5 Turbo Fine-tuning 4K (Output) $0.016 $0.006

This table summarizes the price reductions for input and output tokens across models and configurations.
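To make the savings concrete, consider a workload of one million input tokens and one million output tokens on GPT-4. A quick back-of-the-envelope comparison using the rates from the table (the function name is illustrative):

```python
# Old GPT-4 8K rates vs. new GPT-4 Turbo rates, per 1K tokens, from the table.
OLD = {"input": 0.03, "output": 0.06}
NEW = {"input": 0.01, "output": 0.03}

def token_cost(rates, input_tokens, output_tokens):
    """Dollar cost of a workload at the given per-1K-token rates."""
    return input_tokens / 1000 * rates["input"] + output_tokens / 1000 * rates["output"]

# One million tokens in each direction:
old = token_cost(OLD, 1_000_000, 1_000_000)
new = token_cost(NEW, 1_000_000, 1_000_000)
print(f"${old:.0f} -> ${new:.0f} ({1 - new / old:.0%} cheaper)")  # prints $90 -> $40 (56% cheaper)
```

The same workload that cost $90 under the old GPT-4 8K pricing runs for $40 on GPT-4 Turbo, a reduction of more than half.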

A Competitive Edge

This evolution in pricing and capacity reflects OpenAI’s commitment to providing developers with robust, scalable, and cost-effective AI tools. With such enhancements, OpenAI positions itself as a leader in the AI sphere, enabling developers to innovate, optimize, and push the boundaries of what’s possible with AI technology.
