News Llama 4 – But Did Meta Just Cheat?

rushi1607

Contributor
So, Llama 4 is finally out, and… some people are saying it's incredibly impressive. There are also claims that Meta might've cheated a little: that it was trained on heavily sanitized GPT outputs, and possibly on the test data for the benchmarks themselves.

Moreover, the smaller model, Scout, supposedly has a 10-million-token context window? That's outrageous (rough math on what that would cost below). If you need a server rack's worth of memory to actually use the headline features, it makes me question whether we're entering a period when "open" models won't actually be open in any practical sense.
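For a sense of scale, here's a quick back-of-envelope on the KV cache alone at 10M tokens. The layer and head counts below are my own illustrative guesses, not Scout's published config:

```python
# Back-of-envelope KV-cache size for a 10M-token context.
# All architecture numbers here are illustrative assumptions,
# NOT Scout's actual published configuration.
layers = 48          # assumed transformer layers
kv_heads = 8         # assumed KV heads (grouped-query attention)
head_dim = 128       # assumed per-head dimension
tokens = 10_000_000  # the advertised context length
bytes_per_elem = 2   # fp16/bf16

# Keys and values each store layers * kv_heads * head_dim elements per token.
kv_cache_bytes = 2 * layers * kv_heads * head_dim * tokens * bytes_per_elem
print(f"~{kv_cache_bytes / 1e12:.1f} TB of KV cache")  # ~2.0 TB under these assumptions
```

Quantizing the cache or cutting the KV heads buys you a constant factor, but under anything like these assumptions you're still far beyond desktop hardware.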

What does everyone think? And how valuable is a huge context window for getting real work done?
 
I was incredibly disappointed with Llama 4, especially since Llama 3 and its point releases were groundbreaking.
I was hoping to build a proper RAG agent with it, having stuck with Phi for so long despite all the newer launches (roughly the shape sketched below).
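By "RAG agent" I mean the usual retrieve-then-prompt loop. Here's a minimal toy sketch: the bag-of-words embed() is just a stand-in for a real embedding model, and the docs and query are placeholders I made up, not anything from the Llama 4 release:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: raw term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query, keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Scout advertises a 10M-token context window.",
    "RAG retrieves relevant chunks instead of stuffing everything into context.",
    "Phi models are small enough to run locally.",
]
context = "\n".join(retrieve("huge context window vs. RAG?", docs))
# The assembled prompt then goes to whatever local model you run (Phi, etc.).
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
print(prompt)
```

The point of the pattern is that retrieval keeps the prompt small, which is exactly why I'm unsure a 10M-token window changes much for this kind of workload.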
 