THE FUTURE IS HERE

Are generative AI models hitting a wall?

OpenAI’s next flagship model, codenamed Orion, reportedly delivers a smaller quality jump over GPT-4 than GPT-4 did over GPT-3 at launch: https://www.theinformation.com/articles/openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows

We’ve heard this narrative before: AI companies are running out of useful training data, so they’re experimenting with synthetic data.

Meanwhile, training keeps getting more expensive while yielding only marginally better results.

Large language models are inevitably going through a period of diminishing returns.

Still, just because training-time scaling laws seem to indicate slowing GPT improvements doesn’t mean AI models in general can’t keep improving.
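
To see why returns diminish, here’s a rough sketch of a power-law scaling curve. The functional form loosely follows the Chinchilla-style fits from the research literature; the coefficients are illustrative placeholders, not OpenAI’s (or anyone’s) actual numbers.

```python
# Sketch of a training-time scaling law: loss ~ E + A / N^alpha + B / D^beta
# for a model with N parameters trained on D tokens.
# Coefficients below are illustrative placeholders, not published values.

def estimated_loss(params: float, tokens: float,
                   irreducible: float = 1.7,
                   a: float = 400.0, alpha: float = 0.34,
                   b: float = 400.0, beta: float = 0.28) -> float:
    """Predicted loss for a model with `params` parameters and `tokens` training tokens."""
    return irreducible + a / params**alpha + b / tokens**beta

# Each 10x jump in scale buys a smaller absolute drop in loss.
for scale in (1e9, 1e10, 1e11, 1e12):
    print(f"params={scale:.0e}  loss≈{estimated_loss(scale, 20 * scale):.3f}")
```

Under these made-up coefficients, each 10x increase in model size (with proportionally more data) shaves less off the loss than the previous one, which is the diminishing-returns pattern the reporting describes.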

The industry seems to be shifting to improving models after their initial training, which might yield a different type of scaling law.

Plus, combining LLMs with reasoning models sounds promising.

That’s probably why we don’t have a release date for GPT-5, aside from maybe sometime in 2025: https://www.linkedin.com/posts/emilprotalinski_tech-companies-love-fruit-names-theres-activity-7218641176235503617-M9n3

OpenAI is still figuring out how best to scale the latest GPT gains.