• 1 Post
  • 62 Comments
Joined 1 year ago
Cake day: March 22nd, 2024







  • I feel like this is a microcosm of the internet.

    There’s like zero trust in letting lifelong experts tell you what’s going on and how to respond, and just nodding along. I guess people have always had their own takes, questionable sources and such for millennia, but it feels like we’ve passed a threshold.





  • Honestly, most LLMs suck at the full 128K. Look up benchmarks like RULER.

    In my personal tests over API, Llama 70B is bad out there. Qwen (and any fine-tune based on Qwen Instruct, with maybe an exception or two) not only sucks, but is impractical past 32K once its internal RoPE scaling kicks in. Even GPT-4 is bad out there, with Gemini and some other very large models being the only usable ones I found.

    So, ask yourself… Do you really need 128K? Because 32K-64K is a boatload of code with modern tokenizers (rough token-count sketch below), and that is perfectly doable on a single 24GB GPU like a 3090 or 7900 XTX, and that’s where models actually perform well.
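
    A minimal sketch of that sanity check, assuming the Hugging Face transformers library and a Qwen tokenizer; the model id and source directory are placeholders, not anything from the original post:

    ```python
    # Rough estimate of how many tokens a codebase actually occupies,
    # to see whether you really need 128K of context.
    # Assumes `pip install transformers`; model id and path are placeholders.
    from pathlib import Path
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-32B-Instruct")

    total = 0
    for path in Path("my_project/src").rglob("*.py"):
        text = path.read_text(errors="ignore")
        total += len(tokenizer.encode(text))

    print(f"~{total} tokens")  # many whole projects land well under 32K-64K
    ```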


  • It’s expensive for video though.

    In other words, I have a hard time seeing Pixelfed with a high-quality “benign” TikTok algorithm. It’s already possible for music, but video data/analysis is just so voluminous that, without the profitable exploitation backing it, I don’t see how they’d pay for it.


  • Late to this post, but shoot for an AMD Strix Halo or Nvidia Digits mini PC.

    Prompt processing is just too slow on Apple, and the Nvidia/AMD backends are so much faster with long context.

    Otherwise, your only sane option for 128K context is a server with a bunch of big GPUs.

    Also… what model are you trying to use? You can fit Qwen coder 32B with like 70K context on a single 3090 (rough loading sketch below), but honestly it’s not good above 32K tokens anyway.
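
    If you do go the single-GPU route, here is a minimal sketch of loading a quantized Qwen coder with long context via llama-cpp-python; the GGUF filename, quant level, and exact context size are assumptions, not tested settings:

    ```python
    # Rough sketch: a quantized Qwen coder 32B with long context on one 24GB GPU.
    # Assumes `pip install llama-cpp-python` built with CUDA/ROCm support;
    # the GGUF filename and the 70K context figure are placeholders.
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen2.5-coder-32b-instruct-q4_k_m.gguf",
        n_ctx=70_000,      # long context eats VRAM; lower this if it doesn't fit
        n_gpu_layers=-1,   # offload every layer to the GPU
        flash_attn=True,   # cuts memory use at long context
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize this repo for me."}],
        max_tokens=512,
    )
    print(out["choices"][0]["message"]["content"])
    ```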


  • brucethemoose@lemmy.world to memes@lemmy.world… · 2 months ago

    The context window is indeed the LLM’s memory.

    …But it’s also muddy.

    Many LLMs get ‘dumber’ and less attentive as their context windows grow, and OpenAI’s models just happen to be among them. Performance gets awful close to the full 128K, even with the full GPT-4. Mistral models are also really bad at long-context understanding while, conversely, I find that Google Gemini and Qwen 2.5 stay really good close to their limits.

    There are attempts to measure this performance objectively, like https://github.com/NVIDIA/RULER (a toy version of the idea is sketched below).
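
    Not RULER itself, but a toy sketch of the idea it formalizes: bury one fact in a wall of filler and see whether the model can still dig it out near its context limit. This assumes the openai Python client pointed at any OpenAI-compatible endpoint; the model name and filler size are placeholders:

    ```python
    # Toy "needle in a haystack" probe (not RULER, just the general idea):
    # hide a fact deep in filler text and ask the model to retrieve it.
    # Assumes the `openai` client; model name and filler size are placeholders.
    from openai import OpenAI

    client = OpenAI()

    needle = "The secret launch code is 7481."
    filler = "The sky was a calm, uneventful grey that afternoon. " * 2000
    haystack = filler[: len(filler) // 2] + needle + " " + filler[len(filler) // 2 :]

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": haystack + "\n\nWhat is the secret launch code?"}],
    )
    print(resp.choices[0].message.content)  # accuracy tends to drop as the haystack grows
    ```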



  • brucethemoose@lemmy.world to memes@lemmy.world… · 2 months ago

    I know it’s a meme, but the idea that transformer models ‘remember’ anything is a common misconception.

    They have zero memory. When you submit a prompt, the service feeds your entire chat history back in as one big prompt and… forgets it immediately, with no impact on the model itself. It’s like it’s frozen in time, and a copy is unfrozen and thrown away every time it answers (sketch below).
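
    That statelessness is easy to see in how chat APIs are actually called: the client re-sends the whole conversation on every turn, and the “memory” is nothing more than that growing list. A minimal sketch, assuming the openai client against any OpenAI-compatible endpoint (the model name is a placeholder):

    ```python
    # The "memory" lives entirely in this list, not in the model:
    # every call re-sends the full history, and the weights never change.
    # Assumes the `openai` client; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    for user_msg in ["My name is Alex.", "What is my name?"]:
        history.append({"role": "user", "content": user_msg})
        resp = client.chat.completions.create(model="gpt-4o", messages=history)
        reply = resp.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print(reply)

    # Drop the history list and the "memory" is gone; the model itself kept nothing.
    ```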