  • Most unsustainable “logging land” ends up converted to grazing land; Brazil’s cleared rainforest is the prime example. But logging can also be quite sustainable, and with some caveats, sustainably logged land can basically still count as forest.

    Oil fields are tiny and share land with other uses. See west Texas, where cattle and windmills sit on the same land as the wells.

    Parkland is often more “wild” than actual wilderness, especially nature reserves.

    IDK about highway statistics, but highways really don’t take up a lot of physical land. Their effect of fragmenting wild areas, though, is certainly understated in the graph.

    IDK about mining either, but it doesn’t seem like it would take up a ton of land. Mining is heavily concentrated by necessity, and its worst environmental effects usually come from pollutants and other knock-on effects.


    The one fishy thing to me is grazing land. In places like Africa, there are lots of tribes and other low-tech herders, and if you walk around, their unfenced areas really feel like they straddle the line between wilderness and grazing land. It’s nothing like (say) west Texas, with its vast fields of clearly dedicated grazing land.


  • I really hate the political meme of “they’re taking away our meat!” It’s been drummed up pre-emptively, before these sorts of illustrations can possibly take hold.

    I saw a great documentary about a US Deep South native, a fried-chicken lover, a CEO as white and conservative as you can get, on a mission to develop the best plant-based chicken on Earth. This nut has fry cooks in kitchens constantly testing it. And his pitch is awesome: it already tastes better, and if he could scale up, it’d be cheaper, too. But anticompetitive behavior in the global livestock industry and PR smear campaigns are apparently near-insurmountable obstacles.


    …I hate all that.

    Truth doesn’t matter. Neither does practicality. It’s like we’re living in a cyberpunk novel already.








  • Artificial intelligence is self-replicating, meaning if we feed it with whatever stupid queries we make, it will “get better” at the specifics and “create more versions”

    Disagree.

    In fact, there are signs that extensive “user preference” training is deep-frying models: they score better in settings like LM Arena but get worse at actual work. See: ChatGPT 5.2, Gemini 2.5 Experimental before it regressed at release, Mistral’s latest deepseek-arch release, Qwen3’s reduction in world knowledge vs 2.5, and so on.

    Also, they don’t train themselves, and they don’t learn on the go; all of that happens in separate, manually run training jobs.
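
    To make that concrete, here’s a minimal sketch using Hugging Face transformers (gpt2 is just a stand-in; any causal LM behaves the same way). Serving a query leaves the weights bit-for-bit unchanged; any “getting better” would require a deliberate, separate training run:

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # "gpt2" is only a small stand-in model for illustration.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()  # inference mode: no dropout, no training-time behavior

    snapshot = model.lm_head.weight.clone()  # copy the output weights

    with torch.no_grad():  # no gradients flow, so no learning can happen
        ids = tok("whatever stupid queries we make", return_tensors="pt")
        out = model.generate(**ids, max_new_tokens=20)

    print(tok.decode(out[0], skip_special_tokens=True))

    # The weights are bit-for-bit identical after serving the query.
    assert torch.equal(snapshot, model.lm_head.weight)
    ```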

    They aren’t stupid.

    No, but… the execs are drinking a lot of Kool-Aid, at least judging from their public statements and behavior. Zuckerberg, for example, has completely gutted his best AI team, the literal inventors of modern open LLM infrastructure, in favor of a bunch of tech bros with egos bigger than their contributions. OpenAI keeps making desperate short-term decisions instead of (seemingly) investing in architectural experimentation, giving everyone an easy chance to catch up. Google and Meta are poisoning their absolute best data sources, and it’s already starting to bite Meta.

    Honestly, I don’t know what they’re thinking.


    If the bubble bursts, I don’t think it will be for a while; more likely it just drags on for some time.

    But I’m a massive local LLM advocate, and I’m telling you: it’s a bubble. These transformer(-ish) LLMs are useful tools, not human replacements that will scale infinitely. That last bit is a scam.








  • Maybe, just maybe, they will see it as a waste of money and ditch it, just like Facebook’s metaverse or whatever it was.

    This is what I’m trying to tell you! The only way to make that happen is to tell them it doesn’t work for its intended purpose: helping sell customers Amazon stuff. They don’t care about people messing around with the bot; that’s trivially discarded noise.

    Also, I’m sure at this point all my conversations are being fed back in to train the next one.

    They are not. They are quickly classified (by a machine) and thrown out.

    If you want to fuck with the training set, get the bot to help you do something simple, then, when it actually works, flag it as an error. Then cuss it out if you like. This either (toy sketch after this list):

    • Pollutes the training set by recording a success as a “bad” response.

    • Creates a lot of work for data crunchers looking for these kinds of “feedback lies.”

    And it’s probably the former.
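
    I don’t know Amazon’s actual pipeline, so treat this as a toy sketch of the generic preference-harvesting pattern; every name in it is hypothetical. It just shows why a success flagged as an error is poison: the harvester either trusts the flag and trains against a working answer, or a human has to audit every flag.

    ```python
    # Toy sketch, NOT any real Amazon pipeline: assumes user feedback is
    # auto-harvested into (prompt, chosen, rejected) preference pairs.
    # All names here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class FeedbackEvent:
        prompt: str
        response: str
        thumbs_up: bool  # the user's flag, taken at face value

    def to_preference_pair(event: FeedbackEvent, fallback: str) -> dict:
        """Naive harvester that trusts the flag: a correct answer flagged
        as an error lands in 'rejected', training the model away from it."""
        if event.thumbs_up:
            return {"prompt": event.prompt,
                    "chosen": event.response, "rejected": fallback}
        return {"prompt": event.prompt,
                "chosen": fallback, "rejected": event.response}

    # A genuinely working answer, maliciously flagged as bad:
    lie = FeedbackEvent(
        prompt="How do I return this item?",
        response="Go to Orders, pick the order, then choose Return items...",
        thumbs_up=False,
    )
    print(to_preference_pair(lie, fallback="Sorry, I can't help with that."))
    # -> the working answer is now a 'rejected' example. Either the trainers
    #    eat the poison, or a human has to audit every flag by hand.
    ```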