

it’s an inside joke of the show where he presents himself as a """secret""" furry that loves rat erotica. The content of the link is legit though.
Huh?
Not enough for it to make results diverge. Randomness is added to avoid falling into local maxima during optimization. You should still end at the same global maximum. Models usually run until their optimization converges.
As stated, if the randomness is big enough that multiple reruns end up with different weights, aka optimized for different maxima, the randomization is trash. Anything worth its salt won’t have randomization that big.
So, going back to my initial point, we need the training data to validate the weights. There are ways to check the performance of a model (quite literally, the same algorithm that is used to evaluate weights in training is then used to evaluate the trained weights post training), and the performance should be identical up to a very small rounding error if a rerun uses the same data and parameters.
Holy shit thanks I wasn’t getting it.
Hey, I have trained several models in pytorch, darknet, tensorflow.
With the same dataset and the same training parameters, the same final iteration of training actually does return the same weights. There’s no randomness unless they specifically add random layers, and that’s not really a good idea with RNNs (it wasn’t when I was working with them, at least). In any case, the weights should converge to a very similar point even if randomness is introduced, or else the RNN is pretty much worthless.
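The claim above can be sketched with a toy example (purely illustrative, no real framework): a tiny SGD loop where the seed fixes both the weight init and the data shuffling, so two runs with the same seed return bit-identical weights, and on a convex problem even a different seed lands on the same optimum:

```python
import random

def train(seed, data, lr=0.1, epochs=200):
    """Toy 1-parameter model fit by SGD; the seed controls init and shuffling."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)          # random init, fixed by the seed
    for _ in range(epochs):
        rng.shuffle(data)               # the stochastic part of SGD
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of squared error (w*x - y)^2
            w -= lr * grad
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # y = 2x
w1 = train(seed=42, data=list(data))
w2 = train(seed=42, data=list(data))
w3 = train(seed=7, data=list(data))

print(w1 == w2)             # same seed + same data: bit-identical weights
print(abs(w1 - w3) < 1e-6)  # different seed: same optimum anyway (convex problem)
```

Real deep nets aren’t convex, which is exactly why reruns with different seeds only converge to *similar* points rather than identical ones, and why the same seed plus the same data is the reproducibility baseline.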
The model is open, it’s not open source!
How is it so hard to understand? The complete source of the model is not open. It’s not a hard concept.
Sorry if I’m coming off as rude, but I’m getting increasingly frustrated at having to explain a simple combination of two words that is pretty self-explanatory.
The training data is NOT right there. If I can’t reproduce the results with the given data, the model is NOT open source.
The runner is open source, the model is not
The service uses both, so calling their service open source gives a false impression to the 99.99% of users that don’t know better.
The source OP is referring to is the training data they used to compute those weights. Meaning, petabytes of text. Without that, we don’t know which content they used for training the model.
The running/training engines might be open source, the pretrained model isn’t and claiming otherwise is wrong.
Nothing wrong with it being this way, most commercial models obviously operate the same way. Just don’t claim that the model itself is open source, because a big part of that is that people can reproduce your training to verify that there’s no foul play in the input data. We literally can’t. That’s it.
The running engine and the training engine are open source. The service that uses the model trained with the open source engine and runs it with the open source runner is not, because a biiiig big part of what makes AI work is the trained model, and a big part of the source of a trained model is training data.
When they say open source, 99.99% of the people will understand that everything is verifiable, and it just is not. This is misleading.
As others have stated, a big part of open source development is providing everything so that other users can get the exact same results. This has always been the case in open source ML development; people do provide links to their training data for reproducibility. This has been the case with most of the papers on natural language processing (the overarching field of LLMs) I have read in the past. Both code and training data are provided.
Example in the computer vision world, darknet and YOLO: https://github.com/AlexeyAB/darknet
This is the repo with the code to train and run the darknet models, and then they provide pretrained models, called YOLO. They also provide links to the original dataset the YOLO models were trained on. THIS is open source.
What most people understand as deepseek is the app that uses their trained model, not the running or training engines.
This post mentions open source, not open source code; big distinction. The source of a trained model is partly the training engine and, to a way bigger extent, the input data. We only have access to a fraction of that “source”, so the service isn’t open source.
Just to make clear, no LLM service is open source currently.
The engine is open source, the model is not.
The emulator is open source, the games it can run are not.
I don’t see how it’s so hard to understand.
They are saying that the model the engine is running is open source because they released the model. That’s like saying a game is open source because I released an emulator and the executable file. It’s just not true.
Furry != people in animal suits in public.
Another thing a server can easily access is the timestamp of messages. Even if that is somehow stored encrypted on the server, messages are sent in real time and the server can easily log those, so an E2E-encrypted chat service will at the very least have logs with IPs and timestamps. This can’t really be avoided.
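As a toy illustration (hypothetical relay, made-up names, nothing specific to any real service): even when the payload is opaque ciphertext the server never decrypts, the server itself stamps and logs the metadata of every message it forwards:

```python
import time

def e2e_relay(server_log, sender_ip, ciphertext):
    """Hypothetical E2E relay: the payload stays opaque to the server,
    but arrival metadata is visible to it by necessity."""
    server_log.append({
        "ip": sender_ip,            # known from the TCP connection
        "ts": time.time(),          # the server sets this timestamp itself
        "size": len(ciphertext),    # message length also leaks
    })
    return ciphertext               # forwarded without ever being read

log = []
e2e_relay(log, "203.0.113.7", b"\x9f\x1c opaque encrypted bytes")
print(log[0])  # who sent something and when, with the content unreadable
```

This is why “we can’t read your messages” and “we have no logs about you” are two very different claims.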
Well, it’s when my yearly subscription to the public local sports center renews, and I obviously pay.
My lazy ass sometimes doesn’t feel like moving the left hand so I just use the mouse.
I drank strawberry beer, banana beer and Coke beer during my Erasmus in Germany; I don’t think they would even wince at your suggestion.
It’s like Spaniards and wine, we tend to mix it with anything really.
I tend to fry them with some minced onion; they don’t really need salt afterwards.
Remove the chicken and I’m in, that sounds delicious.
I wasn’t talking about situations with compromised accounts. I was talking about legitimate accounts that were created in a typical way being converted to a zero-knowledge encryption method, and I was acknowledging that it’s hard doing that conversion when a user might have several clients logged in (2 phones, 6 computers…).
My point was that if they haven’t put any effort into the transition, they never will, because the bigger the userbase, the harder the transition is for them to manage. I also find that sad, because they should have invested more effort in that instead of all the features we are getting, but whatever.
If you found the technical terms confusing, public/private keys are a sort of asymmetric “password” pair used in cryptography to secure messages, and shared keys would be symmetric passwords. The theory behind key exchanges and the protocols around them is taught in introductory cryptography courses in bachelor’s and master’s degrees, and I’m sorry to say that I don’t have the energy to explain more, but feel free to read about the terms if you feel like it.
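For anyone curious, the classic textbook key exchange is Diffie–Hellman: each side keeps a private number, publishes a derived public value, and both end up computing the same shared symmetric key without ever sending it. A toy version with deliberately tiny numbers (real deployments use 2048-bit+ groups or elliptic curves; this is illustration only):

```python
# Public parameters, known to everyone (including an eavesdropper)
p, g = 23, 5                 # prime modulus and generator

a = 6                        # Alice's private key (never transmitted)
b = 15                       # Bob's private key (never transmitted)

A = pow(g, a, p)             # Alice's public value: g^a mod p
B = pow(g, b, p)             # Bob's public value:   g^b mod p

shared_alice = pow(B, a, p)  # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)    # Bob computes   (g^a)^b mod p

print(shared_alice == shared_bob)  # both derive the same symmetric key
```

The eavesdropper sees p, g, A and B, but recovering a or b from them is the discrete logarithm problem, which is what makes the exchange secure at real key sizes.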
If you however found it confusing because I write like crap, I’m sorry for potentially offending you with the above paragraph, and I’ll blame my phone keyboard for it :)
Oh, absolutely! That’s maybe why the jokes feel so pure and not disrespectful.