Shit just works as usual
Apple TV is smooth, but as others pointed out, you shouldn’t trust their claims that they’re private. They’re probably more private than e.g. Google out of the box, but not an actual privacy company.
Side note: in case you’re an Apple user and you weren’t aware of this, you can make it a bit better by enabling Advanced Data Protection, so the keys stay on your devices and you have actual E2EE, and you can add hardware keys for 2FA, which makes it more secure. Of course this should be the bare minimum, but it’s nice that they support these things out of the box.
Regarding metadata: https://support.apple.com/en-us/102651#:~:text=This+metadata+is+always+encrypted%2CAdvanced+Data+Protection+is+enabled
Maybe it’s worth checking whether you’re okay with the kind of metadata they’re processing.
https://hotio.dev/containers/qbittorrent/
Why don’t you use the hotio container? That already has it baked in
Don’t ever mention Winnie the Pooh
No porn and drugs, but “free speech”? Yeah right, no thanks. If my account on Mastodon gets banned on an instance, I go somewhere else.
Of course, if the fediverse becomes too centralized, the couple of instances left might just defederate from everyone else, but OTOH what protects me from a couple of individuals downvoting me into oblivion on Bastyon?
They’re both decentralized in their own way but communities have to fight against malicious actors that attack the decentralization.
Typical politician, identifies the problem only to draw the absolute wrong conclusion.
Thanks for the reply, still reading here. Yeah, thanks to the comments and some benchmarks I read, I abandoned the idea of getting an Apple; it’s just too slow.
I was hoping to test Qwen 32B or Llama 70B for running longer contexts, hence the Apple seemed appealing.
Congrats on being that guy
You’re aware that there’s the official OpenAI Python library, right? https://github.com/openai/openai-python
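For reference, a minimal sketch of what using it looks like. The model name, prompts, and the `ask`/`build_messages` helpers here are just placeholders I made up for illustration; `base_url` is the real client parameter that also lets you point it at a local OpenAI-compatible server instead of OpenAI itself:

```python
def build_messages(system_prompt, user_prompt):
    """Assemble a chat request in the message format the API expects."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


def ask(prompt, model="gpt-4o-mini", base_url=None):
    # Imported inside the function so the sketch parses without the package installed.
    from openai import OpenAI

    # With base_url=None the client talks to api.openai.com (API key read from
    # the OPENAI_API_KEY env var); set it to e.g. a llama.cpp or Ollama endpoint
    # to reuse the same code against a local model.
    client = OpenAI(base_url=base_url)
    resp = client.chat.completions.create(
        model=model,
        messages=build_messages("You are concise.", prompt),
    )
    return resp.choices[0].message.content
```

The nice part is that the same few lines work against local inference servers that speak the OpenAI protocol, so you’re not locked in.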
It’s really nothing fancy especially on Lemmy where like 99% of people are software engineers…
Are you drunk?
Yeah, I found some stats now, and indeed you’re gonna wait like an hour for prompt processing if you throw 80-100k tokens into a powerful model. With APIs that kinda works instantly; not surprising, but just to give a comparison. Bummer.
Thanks! Hadn’t thought of YouTube at all, but it’s super helpful. I guess that’ll help me decide if the extra RAM is worth it, considering that inference will be much slower if I don’t go NVIDIA.
Yeah, I was thinking about running something like Code Qwen 72B, which apparently requires 145GB of RAM to run the full model. But if it’s super slow, especially with large contexts, and I can only run small models at acceptable speed anyway, it may be worth going NVIDIA alone for CUDA.
Meh, ofc I don’t.
Thanks, that’s very helpful! Will look into that type of build
I understand what you’re saying, but I come to this community because I like having more input, hearing about the experiences of others, and potentially learning about things I didn’t know about. I wouldn’t ask specifically in this community if I didn’t want to optimize my setup as much as I can.
Interesting, is there any kind of model you could run at reasonable speed?
I guess over time it could amortize, but if the usability sucks, that may make it not worth it. OTOH I really don’t want to send my data to any company.
I’d honestly be open to that, but wouldn’t an AMD setup take up a lot of space and consume lots of power / be loud?
It seems like in terms of price & speed, the Macs suck compared to other options, but if you don’t have a lot of space and don’t want to hear an airplane engine constantly I’m wondering if there are options.
Yeah, the unified memory of the M-series Macs is very attractive for running models at full context length, and the memory bandwidth is quite good for token generation compared to the price, power consumption, and heat generation of NVIDIA GPUs.
Since I’ll have to put this in my kitchen/living room that’d be a big plus but idk how well prompt processing would work if I send over like 80k tokens.
You can deploy a Cloudflare Worker that exposes an API endpoint with an SQLite DB completely for free and without doing any maintenance. I don’t think the DB is encrypted, so it wouldn’t be my first choice if privacy is a concern. There’s a bit of a learning curve with all the UI bloat, but once you’ve figured it out it’s a very hassle-free solution.