  • Artificial intelligence is self-replicating, meaning if we feed it with whatever stupid queries we make, it will “get better” at the specifics and “create more versions”

    Disagree.

    In fact, there are signs that extensive “user preference” training is deep frying models, so they score better in settings like LM Arena but get worse at actual work. See: ChatGPT 5.2, Gemini 2.5 Experimental before it regressed at release, Mistral’s latest deepseek-arch release, Qwen3’s reduction in world knowledge vs 2.5, and so on.

    Also: they don’t train themselves, and they don’t learn on the go. All of that is done manually, offline.
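
    To make that concrete, here’s a toy sketch of what a “chat turn” is to the model: pure inference over frozen weights. The model below is a stand-in linear layer, not any vendor’s real stack.

    ```python
    # Toy sketch: chatting is inference over frozen weights.
    import torch

    model = torch.nn.Linear(8, 8)   # stand-in for an LLM
    model.eval()                    # inference mode

    before = model.weight.clone()

    with torch.no_grad():           # no gradients, no updates
        for _ in range(1000):       # a thousand "chat turns"
            _ = model(torch.randn(1, 8))

    assert torch.equal(model.weight, before)  # weights unchanged, bit for bit
    # Any "learning" happens later, in a separate training run that
    # someone deliberately kicks off to produce a new checkpoint.
    ```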

    They aren’t stupid.

    No, but… The execs are drinking a lot of Kool-Aid, at least going by their public statements and behavior. Zuckerberg, for example, has completely gutted his best AI team, the literal inventor of modern open LLM infrastructure, for a bunch of tech bros with egos bigger than their contributions. OpenAI keeps making desperate short-term decisions instead of (seemingly) investing in architectural experimentation, giving everyone an easy chance to catch up. Google and Meta are poisoning their absolute best data sources, and it’s already starting to bite Meta.

    Honestly, I don’t know what they’re thinking.


    If the bubble bursts, I don’t think it will be for a while.

    …I think the bubble will drag on for some time.

    But I’m a massive local LLM advocate, and I’m telling you: it’s a bubble. These transformer(ish) LLMs are useful tools, not human replacements that will scale infinitely. That last bit is a scam.


  • Maybe, just maybe, they will see it as a waste of money and ditch it, just like Facebook’s metaverse or whatever it was.

    This is what I’m trying to tell you! The only way to do that is to tell them it doesn’t work for its intended purpose: selling customers Amazon stuff. They don’t care about people messing around with the bot; that’s trivially discarded noise.

    Also, I’m sure at this point all my conversations are being fed back in to train the next one.

    They are not. They’re quickly classified (by a machine) and thrown out.

    If you want to fuck with the training set, get the bot to help you do something simple, then when it actually works, flag it as an error. Then cuss it out if you like. This either:

    • Pollutes the training set with a success as a “bad” response.

    • Creates a lot of work for data crunchers looking for this kind of “feedback lie.”

    And it’s probably the former.
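
    Here’s a rough sketch of the mechanism. The pipeline and field names are my guesses, not Amazon’s actual system.

    ```python
    # Hypothetical feedback pipeline; all names are assumptions.
    conversations = [
        # (request, reply, reply_was_correct, user_flagged_error)
        ("USB-C cable under $10", "Here are 3 matches...", True, True),   # the "feedback lie"
        ("USB-C cable under $10", "Try this $45 monitor!", False, True),  # a real failure
    ]

    training_pairs = []
    for request, reply, correct, flagged in conversations:
        # Note: `correct` is never consulted. An automated pipeline mostly
        # has to trust the flag; checking every flag against ground truth
        # is exactly the expensive human work.
        label = "bad" if flagged else "good"
        training_pairs.append((request, reply, label))

    # The correct reply is now labeled "bad": a success poisoned into
    # the set as a negative example.
    print(training_pairs[0])
    ```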


  • Don’t tell it it’s wrong; leave feedback in a separate box.

    Not in its chat, but with a feedback button.


    Let me emphasize: the LLM remembers nothing. Amazon does not care about an ‘adversarial’ response. The most cussing it out can do is factor into your Amazon ad profile, and not to your benefit.

    And if you tell the bot it did wrong, it does not care. It doesn’t factor into anything.

    But if you legitimately ask it to help you buy something, and it gets that wrong, and you leave dedicated feedback, that registers for Amazon. It tells them their chatbot isn’t working and is actually frustrating customers who are trying to use it to buy something. That’s how you tank the program.
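
    As a sketch of why that channel matters (the event fields are my invention, not Amazon’s): dedicated feedback can roll up into the one number a product team actually watches.

    ```python
    # Hypothetical rollup of feedback events into a product metric.
    feedback_events = [
        {"intent": "purchase_assist", "task_completed": False},  # real, useful signal
        {"intent": "purchase_assist", "task_completed": False},
        {"intent": "chitchat",        "task_completed": True},   # noise, filtered out
    ]

    purchases = [e for e in feedback_events if e["intent"] == "purchase_assist"]
    success_rate = sum(e["task_completed"] for e in purchases) / len(purchases)

    # 0% here. Genuine "it failed to help me buy" feedback is what drags
    # down the number execs look at; random cussing never reaches it.
    print(f"purchase-assist success rate: {success_rate:.0%}")
    ```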


  • You aren’t talking to AI; you’re talking to chatbots with no memory and no ability to change their internal state, so you don’t have to worry about that. Honestly, it’s a waste of your keystrokes and brainpower: you’re shouting into a void.

    …If you want to attack it, try getting it to actually do something (like “find me an item with X requirements”), then give feedback that it’s wrong if there’s a button for it. That does get registered.
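
    For anyone who wants to see why “no memory” is literal, here’s a minimal client sketch against a generic chat-completion-style API. The endpoint and response shape are made up; the point is that the only state is the transcript the client resends every turn.

    ```python
    # Minimal sketch; endpoint and response shape are hypothetical.
    import requests

    history = []  # the ONLY memory, and it lives on the client

    def chat_turn(user_message: str) -> str:
        history.append({"role": "user", "content": user_message})
        resp = requests.post(
            "https://example.com/v1/chat",   # hypothetical endpoint
            json={"messages": history},      # the model sees only what we send
            timeout=30,
        )
        reply = resp.json()["reply"]         # hypothetical response shape
        history.append({"role": "assistant", "content": reply})
        return reply

    # Drop `history` and the bot has no idea you ever talked to it;
    # nothing you typed touched its weights.
    ```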