Artificial intelligence is self-replicating, meaning if we feed it with whatever stupid queries we make, it will “get better” at the specifics and “create more versions”
Disagree.
In fact, there are signs that extensive “user preference” training is deep-frying models, so they score better in settings like LM Arena but get worse at actual work. See: ChatGPT 5.2, Gemini 2.5 Experimental before it regressed at release, Mistral’s latest deepseek-arch release, Qwen3’s reduction in world knowledge vs 2.5, and so on.
Also, they don’t train themselves, and they don’t learn on the go; all of that training is done manually, offline.
They aren’t stupid.
No, but… the execs are drinking a lot of Kool-Aid, at least judging from their public statements and behavior. Zuckerberg, for example, has completely gutted his best AI team (the literal inventors of modern open LLM infrastructure) in favor of a bunch of tech bros with egos bigger than their contributions. OpenAI keeps making desperate short-term decisions instead of (seemingly) investing in architectural experimentation, giving everyone an easy chance to catch up. Google and Meta are poisoning their absolute best data sources, and it’s already starting to bite Meta.
Honestly, I don’t know what they’re thinking.
If the bubble bursts, I don’t think it will be for a while; I expect it to drag on for some time.
But I’m a massive local-LLM advocate, and I’m telling you: it’s a bubble. These transformer(ish) LLMs are useful tools, not human replacements that will scale infinitely. That last bit is a scam.




Price!
We can argue about it all we want, but basically everything hinges on its street price.
If it’s cheap, all those critiques are irrelevant.
If it’s expensive? “It’s cute, I like Steam, I like how it mostly works OOTB” gets real niche, real quick.