

AFAIK no, but it takes a while for everything to sink in, and hosts like John Oliver only have so much air time.
Not easy for most users on app-based mobile/tablet devices.
(Puts on tinfoil hat) His parent company, Warner Bros. Discovery, probably wouldn’t like that? And John Oliver makes his living through ad revenue.
They’ll just bury the settings, or remove them. Who’s going to stop them now?
Precisely.
And it’s a “hard” sci-fi universe rooted in theoretically possible physics. No FTL or causality violations.
Would recommend Orion’s Arm for “theoretical” but fictional takes on structures like this:
https://www.orionsarm.com/eg-article/5067d430e6021
That article isn’t comprehensive either; their universe is quite expansive.
Two of my favorites may be “W-brains,” computing structures with very carefully arranged wormhole pairs serving as data buses to overcome the latency of communicating at such scale, and “neural stars,” another take which is a computational structure inside a neutron star sized volume/mass (again, to overcome latency issues).
There are much smaller megastructures too, depending on where in the timeline you are looking.
I feel like this is a microcosm of the internet.
There’s like zero trust in letting lifelong experts tell you what’s going on and how to respond, and just nodding your head. I guess people have always had their own takes, questionable sources and such for millennia, but it feels like we’ve passed a threshold.
The Fallout TV series is a rather explicit jab at Big Oil.
People were pretty upset about it too. Also, it was good.
I saw that Texas has an outbreak right now. Scary stuff.
Texan here, I legit missed this. I can’t believe I did; I was literally the first in line to get COVID vaccines/boosters. Seems Abbott hasn’t said anything, but some local news has.
https://www.dshs.texas.gov/news-alerts/health-alert-measles-outbreak-gaines-county-texas
Gaines County is in the middle of nowhere, but still.
Even giving them the benefit of the doubt, can’t we assume OpenAI is a massive target of foreign espionage? Haven’t they already had breaches… and that’s just what we know about.
I could see on prem LLMs being a thing for coding assistance, but wtf. This is not going through remote servers, right?
Honestly, most LLMs suck at the full 128K. Look up benchmarks like RULER.
In my personal tests over API, Llama 70B is bad out there. Qwen (and any fine-tune based on Qwen Instruct, with maybe an exception or two) not only sucks, but is impractical past 32K once its internal RoPE scaling kicks in. Even GPT-4 is bad out there, with Gemini and some other very large models being the only usable ones I found.
So, ask yourself… Do you really need 128K? Because 32K-64K is a boatload of code with modern tokenizers, and that is perfectly doable on a single 24G GPU like a 3090 or 7900 XTX, and that’s where models actually perform well.
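If you want to sanity-check how much code actually fits in 32K-64K tokens, here’s a rough sketch of counting tokens for a source tree. I’m assuming the tiktoken library and its cl100k_base encoding as a stand-in; Qwen/Llama use their own tokenizers, but the ballpark is similar, and the `src` directory is just a placeholder:

```python
# Rough estimate of how many tokens a source tree occupies.
# Assumes the tiktoken library; cl100k_base is a stand-in encoding,
# and "src" is a hypothetical project directory.
import pathlib
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

total = 0
for path in pathlib.Path("src").rglob("*.py"):
    text = path.read_text(errors="ignore")
    total += len(enc.encode(text))

print(f"~{total:,} tokens")  # 32K-64K already covers a surprising amount of code
```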
It’s expensive for video though.
In other words, I have a hard time seeing Pixelfed with a high quality “benign” TikTok algorithm. It’s already possible for music, but video data/analysis is just so voluminous that, without the profitable exploitation backing it, I don’t see how they’d pay for it.
Late to this post, but shoot for an AMD Strix Halo or Nvidia Digits mini PC.
Prompt processing is just too slow on Apple, and the Nvidia/AMD backends are so much faster with long context.
Otherwise, your only sane option for 128K context is a server with a bunch of big GPUs.
Also… what model are you trying to use? You can fit Qwen coder 32B with like 70K context on a single 3090, but honestly it’s not good above 32K tokens anyway.
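For a back-of-the-envelope feel for why ~70K context can squeeze onto 24GB at all, here’s the KV-cache math as a sketch. The dimensions are my assumptions for a Qwen2.5-32B-class model (64 layers, 8 KV heads, head dim 128), and the quantization levels are assumptions too:

```python
# Back-of-the-envelope KV-cache sizing for long context on a 24GB card.
# Model dimensions below are assumptions for a Qwen2.5-32B-class model.
layers, kv_heads, head_dim = 64, 8, 128
ctx = 70_000                      # target context length in tokens
bytes_per_elem = 0.5              # ~Q4 cache; 1 for Q8, 2 for fp16

kv_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem  # K and V
cache_gb = kv_per_token * ctx / 1e9
weights_gb = 18                   # rough size of a ~4.5 bpw quant of a 32B model

print(f"KV cache: ~{cache_gb:.1f} GB, weights: ~{weights_gb} GB, "
      f"total: ~{cache_gb + weights_gb:.1f} GB on a 24 GB GPU")
```

With an fp16 cache the same context blows well past 24GB, which is why cache quantization matters so much here.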
The context window is indeed the LLM’s memory.
…But it’s also muddy.
Many LLMs get ‘dumber’ and less attentive as their context windows grow, and OpenAI’s models happen to be among them. Performance is awful near the full 128K, even with full GPT-4. Mistral models are also really bad at long-context understanding, while, conversely, I find that Google Gemini and Qwen 2.5 are really good close to their limits.
There are attempts to try and measure this performance objectively, like: https://github.com/NVIDIA/RULER
It’s still ephemeral; chats don’t change the underlying language model, but yes, it’s interesting.
I know it’s a meme, but the idea that transformer models ‘remember’ anything is a common misconception.
They have zero memory. When you submit a prompt, it feeds your entire chat history as one big prompt and… forgets it immediately, with no impact on the model itself. It’s like it’s frozen in time, then copied, unfrozen, and thrown away every time it answers.
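To make that concrete, here’s a minimal sketch of where the “memory” actually lives with a typical chat-completions-style API: the client resends the whole message list every turn, and the model itself keeps nothing between calls. The endpoint and model name are placeholders:

```python
# Minimal sketch: the "memory" lives entirely in the messages list the client
# resends each turn; the model weights never change between calls.
# Assumes the openai Python client against any OpenAI-compatible endpoint;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    resp = client.chat.completions.create(
        model="gpt-4o",        # placeholder model name
        messages=history,      # the ENTIRE conversation, sent every single time
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Delete the history list and the "memory" is gone; the model itself kept nothing.
```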
That’s what I don’t get. If the Proton CEO was actually raging MAGA, the last thing he should do, strategically, is stoke fires by stirring this up. That’s business 101.
…He must want conservatives’ ears for some kind of policy issue, maybe to the detriment of Proton’s competitors. But what?
Standing up for the little guy. Huh. Is that why billionaires and CEOs are throwing literal tens of millions at Trump? Why he staffed his cabinet with billionaires? Why the center of his policy is tax cuts for the giga wealthy, at the expense of everyone else and the national debt, at a time when wealth inequality is literally tearing the country apart?
https://www.axios.com/2025/01/15/trump-windfall-fundraising-500-million
https://www.axios.com/2024/12/09/trump-wealth-cabinet-politicians-billionaires
These are objective, public facts. Like, I’m way more conservative than Lemmy’s center and willing to acknowledge any good Trump does, but what reality is this guy living in? Who is this statement for? Who the heck does he think is using Proton services? He just pissed off his employees and customers for… What?
Still, they’re about the size of SODIMMs and relatively flat.
It would be iffy for the Max I suppose.
Yeah, I don’t even know what the ostensible excuse is for their SSDs. Keeping the laptop knife thin, I guess?
This was kind of justified for the RAM.
Packaging LPDDR smartphone-style like Apple does makes the traces much shorter, which lets the RAM be faster and lower power. DDR5-5600 DIMMs in “regular” laptops are literally electrically maxed out, and power hogs because they run at crazy voltages for the speed. I would think that much voltage would degrade the CPU too.
Fortunately LPCAMMs solve this!
And Apple is totally going to use them since they have no technical excuse anymore… right?
RIGHT!?
“Normal” people don’t use Facebook through the browser. Heck, I know functional, working adults, and reasonably smart kids, who don’t really understand the concept of a browser/URLs and just do everything through apps, bar the bare minimum for work.