Arthur Besse
cultural reviewer and dabbler in stylistic premonitions
- 8 Posts
- 43 Comments
were you careful to be sure to get the parts that have the key’s name and email address?
It should be; if there are chunks missing it’s unusable. At least that’s my thinking, since gpg is usually a binary and ascii armor makes it human readable. As long as a person cannot guess the blacked-out parts, there shouldn’t be any data.
you are mistaken. A PGP key is a binary structure which includes the metadata. PGP’s “ascii-armor” means base64-encoding that binary structure (and putting the BEGIN and END header lines around it). One can decode fragments of a base64-encoded string without having the whole thing. To confirm this, you can use a tool like `xxd` (or `hexdump`): try pasting half of your ascii-armored key into `base64 -d | xxd` (and hit enter and ctrl-D to terminate the input) and you will see the binary structure as hex and ascii, including the key metadata. i think either half will do, as PGP keys typically have their metadata in there at least twice.
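Here is a minimal sketch of the same idea in Python, using a made-up stand-in blob instead of real key material (the byte values are hypothetical, not actual PGP packet syntax):

```python
import base64

# Hypothetical stand-in for a key's binary structure: some opaque bytes
# followed by a user ID, just to show that metadata sits inside the blob.
key_bytes = b"\x99\x01\x0d" + b"\x00" * 40 + b"\xb4\x21Alice Example <alice@example.net>"
armored_body = base64.b64encode(key_bytes).decode()

# base64 works in independent 4-character groups (3 bytes each), so any slice
# that starts on a group boundary decodes on its own - no need for the whole key.
start = len(armored_body) // 2
start += (-start) % 4                  # round up to a 4-character group boundary
fragment = armored_body[start:]

print(base64.b64decode(fragment))      # the name and email are readable here
```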
just for fun I OCR’d your photo and decoded a few fragments of it; after trying different permutations of uppercase “i” and lowercase “L” due to them looking the same in that font, I decoded what appears to be a name… i can DM it to you if you want me to prove it 😀
how did you choose which areas to redact? were you careful to be sure to get the parts that have the key’s name and email address?
TLDR: this is way more broken than I initially realized
To clarify a few things:
- No JavaScript is sent after the file metadata is submitted
So, when i wrote “downloaders send the filename to the server prior to the server sending them the javascript” in my first comment, I hadn’t looked closely enough - I had just uploaded a file and saw that the download link included the filename in the query part of the URL (the part between the ? and the #). This is the first thing that a user sends when downloading, before the server serves the javascript, so the server can clearly decide to serve malicious javascript or not based on the filename (as well as the user’s IP).
However, looking again now, I see it is actually much worse - you are sending the password in the URL query too! So, there is no need to ever serve malicious javascript because currently the password is always being sent to the server.
As I said before, the way other similar sites do this is by including the key in the URL fragment which is not sent to the server (unless the javascript decides to send it). I stopped reading when I saw the filename was sent to the server and didn’t realize you were actually including the password as a query parameter too!
😱
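To make the query-vs-fragment distinction concrete, here is a small sketch in Python (the URL and parameter names are made up, not your site’s actual ones) of which parts of a link the browser actually transmits:

```python
from urllib.parse import urlsplit

# Hypothetical download link: filename and password in the query, a key after "#".
url = "https://example.invalid/download?file=report.pdf&password=hunter2#key=NOT-SENT"
parts = urlsplit(url)

# The browser's HTTP request line contains only the path and the query,
# so the filename and the password both reach the server.
request_target = parts.path + ("?" + parts.query if parts.query else "")
print("sent to the server:  GET", request_target)

# The fragment never leaves the browser unless a script explicitly sends it.
print("kept in the browser:", parts.fragment)
```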
The rest of this reply was written when I was under the mistaken assumption that the user needed to type in the password.
That’s a fundamental limitation of browser-delivered JavaScript, and I fully acknowledge it.
Do you acknowledge it anywhere other than in your reply to me here?
This post encouraging people to rely on your service says “That means even I, the creator, can’t decrypt or access the files.” To acknowledge the limitations of browser-based e2ee I think you would actually need to say something like “That means even I, the creator, can’t decrypt or access the files (unless I serve a modified version of the code to some users sometimes, which I technically could very easily do and it is extremely unlikely that it would ever be detected because there is no mechanism in browsers to ensure that the javascript people are running is always the same code that auditors could/would ever audit).”
The text on your website also does not acknowledge the flawed paradigm in any way.
This page says “Even if someone compromised the server, they’d find only encrypted files with no keys attached — which makes the data unreadable and meaningless to attackers.” To acknowledge the problem here, this sentence would need to say approximately the same as what I posted above, except replacing “unless I serve” with “unless the person who compromised it serves”. That page goes on to say that “Journalists and whistleblowers sharing sensitive information securely” are among the people who this service is intended for.
The server still being able to serve malicious JS is a valid and well-known concern.
Do you think it is actually well understood by most people who would consider relying on the confidentiality provided by your service?
Again, I’m sorry to be discouraging here, but: I think you should drastically re-frame what you’re offering, to inform people that it is best-effort and the confidentiality provided is not actually something to be relied upon alone.

The front page currently says it offers “End-to-end encryption for complete security”. If someone wants/needs to encrypt files so that a website operator cannot see the contents, then doing so using software ephemerally delivered from that same website is not sufficient: they should encrypt the file first using a non-web-based tool.

update: actually you should take the site down, at least until you make it stop sending the key to the server. I am deleting your post now; feel free to PM me if you want to discuss this any further.
Btw, DeadDrop was the original name of Aaron Swartz’ software which later became SecureDrop.
it’s zero-knowledge encryption. That means even I, the creator, can’t decrypt or access the files.
I’m sorry to say… this is not quite true. You (or your web host, or a MITM adversary in possession of a certificate authority key) can replace the source code at any time - and can do so on a per-user basis, targeting specific IP addresses - to make it exfiltrate the secret key from the uploader or downloader.
Anyone can audit the code you’ve published, but it is very difficult to be sure that the code one has audited is the same as the code that is being run each time one is using someone else’s website.
This website has a rather harsh description of the problem: https://www.devever.net/~hl/webcrypto … which concludes that all web-based cryptography like this is fundamentally snake oil.
Aside from the entire paradigm of doing end-to-end encryption using javascript that is re-delivered by a webserver at each use being fundamentally flawed, there are a few other problems with your design:
- allowing users to choose a password and using it as the key means that most users’ keys can be easily brute-forced. (Since users need to copy+paste a URL anyway, it would make more sense to require them to transmit a high-entropy key along with it.)
- the filenames are visible to the server
- downloaders send the filename to the server prior to the server sending them the javascript which prompts for the password and decrypts the file. this means you have the ability to target maliciously modified versions of the javascript not only by IP but also by filename.
There are many similar browser-based things which still have the problem of being browser-based but which do not have these three problems: they store the file under a random identifier (or a hash of the ciphertext), and include a high-entropy key in the “fragment” part of the URL (the part after the # symbol), which is by default not sent to the server but is readable by the javascript.
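A minimal sketch of that approach in Python (the URL shape and names here are hypothetical, just to show the idea of a random key and identifier rather than a user-chosen password and filename):

```python
import secrets

# A user-chosen password makes a brute-forceable key; a random 256-bit value
# does not, and the user has to copy+paste a link anyway, so the entropy is free.
key = secrets.token_urlsafe(32)        # 32 random bytes, URL-safe base64
file_id = secrets.token_urlsafe(16)    # random identifier instead of the filename

# The server only ever sees /f/<file_id>; the key after "#" stays in the browser
# for the decryption javascript to read.
share_url = f"https://example.invalid/f/{file_id}#{key}"
print(share_url)
```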
(Note that the javascript still can send the fragment to the server, however… it’s just that by default the browser does not.)

I hope this assessment is not too discouraging, and I wish you well on your programming journey!
Arthur Besse@lemmy.ml to Privacy@lemmy.ml • Why does Signal want a phone number to register if it's supposedly privacy first? · 2 months ago
When it’s libre software, we’re not banned from fixing it.
Signal is a company and a network service and a protocol and some libre software.
Anyone can modify the client software (though you can’t actually distribute modified versions via Apple’s iOS App Store, due to Apple’s binary distribution system being incompatible with GPLv3… which is why, unlike the Android version, there are no forks of Signal for iOS) but if a 3rd party actually “fixed” the problems I’ve been talking about here then it really wouldn’t make any sense to call that Signal anymore, because it would be a different (and incompatible) protocol.
Signal (the company) must approve of and participate in any change to Signal (the protocol and service).
Arthur Besse@lemmy.ml to Privacy@lemmy.ml • Why does Signal want a phone number to register if it's supposedly privacy first? · 2 months ago
Downvoted as you let them bait you. Escaping WhatsApp and Discord, anti-libre software, is more important.
I don’t know what you mean by “bait” here, but…
Escaping to a phone-number-requiring, centralized-on-Amazon, closed-source-server-having, marketed-to-activists, built-with-funding-from-Radio-Free-Asia (for the specific purpose of being used by people opposing governments which the US considers adversaries) service which makes downright dishonest claims of having a cryptographically-ensured inability to collect metadata? No thanks.
(fuck whatsapp and discord too, of course.)
Arthur Besse@lemmy.ml to Privacy@lemmy.ml • Why does Signal want a phone number to register if it's supposedly privacy first? · 2 months ago
it’s being answered in the github thread you linked
The answers there are only about the fact that it can be turned off and that by default clients will silently fall back to “unsealed sender”.
That does not say anything about the question of what attacks it is actually meant to prevent (assuming a user does “enable sealed sender indicators”).
This can be separated into two different questions:
- For an adversary who does not control the server, does sealed sender prevent any attacks? (which?)
- For an adversary who does control the server, how does sealed sender prevent that adversary from identifying the sender (via the fact that they must identify themselves to receive messages, and do so from the same IP address)?
The strongest possibly-true statement i can imagine about sealed sender’s utility is something like this:
For users who enable sealed sender indicators AND who are connecting to the internet from the same IP address as some other Signal users, from the perspective of an adversary who controls the server, sealed sender increases the size of the set of possible senders for a given message from one to the number of other Signal users who were online from behind the same NAT gateway at the time the message was sent.
This is a vastly weaker claim than saying that “by design” Signal has no possibility of collecting any information at all besides the famous “date of registration and last time user was seen online” which Signal proponents often tout.
Arthur Besse@lemmy.ml to Privacy@lemmy.ml • Why does Signal want a phone number to register if it's supposedly privacy first? · 2 months ago
You can configure one or more of your profiles’ addresses to be a “business address”, which means that when people contact you via it, it will always create a new group automatically. Then you can (optionally, on a per-contact basis) add your other devices’ profiles to it (as can your contact with their other devices, after you make them an admin of the group).
It’s not the most obvious/intuitive system but it works well and imo this paradigm is actually better than most systems’ multi-device support in that you can see which device someone is sending from and you can choose to give different contacts access to a different subset of your devices than others.
Arthur Besse@lemmy.ml to Privacy@lemmy.ml • Why does Signal want a phone number to register if it's supposedly privacy first? · 2 months ago
You can just make a group for each contact with all of your (and their) devices in it.
Arthur Besse@lemmy.ml to Privacy@lemmy.ml • Why does Signal want a phone number to register if it's supposedly privacy first? · 2 months ago
Messages are private on signal and they cannot be connected to you through sealed sender.
No. Signal’s sealed sender has an incoherent threat model and only protects against an honest server, and if the server is assumed to be honest then a “no logs” policy would be sufficient.
Sealed sender is complete security theater. And, just in case it is ever actually difficult for the server to infer who is who (eg, if there are many users behind the same NAT), the server can also simply turn it off and the client will silently fall back to “unsealed sender”. 🤡
The fact that they go to this much dishonest effort to convince people that they “can’t” exploit their massive centralized trove of activists’ metadata is a pretty strong indicator of one answer to OP’s question.
Arthur Besse@lemmy.ml to Privacy@lemmy.ml • snowden on "nothing to hide, nothing to fear" · 2 months ago
The fediverse condemns free speech. The fediverse bans unapproved opinions and wrong think,
“The fediverse” isn’t a monolith; different instances have wildly different moderation policies. The instance-level federation-or-defederation paradigm is certainly limiting, but the existence of moderation does not mean anyone “condemns free speech”.
On the contrary, moderated spaces are actually an essential ingredient for enabling free speech.
Calling people mentally ill because of differences of opinion (as I see you are prone to doing) has a chilling effect.
Or, put another way, persistently being a jerk to someone is a way to censor them.
I consider my volunteer moderation activities here on lemmy to be a form of free-speech activism.
proving that the fediverse is an enemy to the principals of Edward Snowden.
Where do you think Snowden hangs out online? 4chan? Maybe? But the only place I know where he posts is, sadly, twitter (where he last posted in January). Twitter’s moderation policies are ever-changing, subject to the whims of one guy, but it also has never been and never will be unmoderated. Do you think twitter is also an “enemy to the principals of Edward Snowden”?
But it’s fun to be on here once in a while knowing the right thing to say that forces people to come undone and expose their true personality.
You’re here because the unmoderated types of spaces you’re implicitly idealizing are actually inhabited by edgelords (and spammers, and CSAM). More interesting discussions online tend to happen in well-moderated spaces.
StartPage/StartMail is owned by an adtech company whose website boasts that they “develop & grow our suite of privacy-focused products, and deliver high-intent customers to our advertising partners” 🤔
They have a whitepaper which actually does a good job explaining how end-to-end encryption in a web browser (as Tuta, Protonmail, and others do) can be circumvented by a malicious server:
The malleability of the JavaScript runtime environment means that auditing the future security of a piece of JavaScript code is impossible: The server providing the JavaScript could easily place a backdoor in the code, or the code could be modified at runtime through another script. This requires users to place the same measure of trust in the server providing the JavaScript as they would need to do with server-side handling of cryptography.
However (i am not making this up!) they hilariously use this analysis to justify having implemented server-side OpenPGP instead 🤡
Tuta’s product is snake oil.
If you don’t care about their (nonstandard, incompatible, and snake oil) end-to-end encryption feature and just want a free email provider which protects your privacy in other ways, the fact that their flagship feature is snake oil should still be a red flag.
https://digdeeper.club/articles/browsers.xhtml has a somewhat comprehensive analysis of a large number of browsers you might consider, illuminating depressing (and sometimes surprising) privacy problems with literally all of them.
In the end it absurdly recommends something which forked from Firefox a very long time ago, which is obviously not a reasonable choice from a security standpoint. I don’t have a good recommendation, but I definitely don’t agree with that article’s conclusion: privacy features are pointless if your browser is trivially vulnerable to exploits for a plethora of old bugs, which will inevitably be the case for a volunteer-run project that diverged from Firefox a long time ago and thus cannot benefit from Mozilla’s security fixes in each new release.
However, despite its ridiculous conclusion, that page’s analysis is still useful in deciding which of the terrible options to pick.
Arthur Besse@lemmy.ml to Privacy@lemmy.ml • Why was the thread about the free VPN Riseup removed, while there's a 2 day old thread about Windscribe, a paid VPN still active? · 5 months ago
short answer: because nobody flagged that other one. (it is deleted now too.)
re: riseup, is it even possible to use their VPN without an invite code? (i don’t think it is?)
in any case, riseup says clearly that their purpose is “to provide digital self-determination for social movements” - it is not intended for torrenting, even if it might work for it.
feel free to PM me if you want to discuss this further; i am deleting this post too. (at the time of deletion it has 8 upvotes and 33 downvotes, btw.)
Arthur Besse@lemmy.ml to memes@lemmy.world • Why do people faint at the sight of plain-text code? · 5 months ago
i don’t usually cross-post my comments but I think this one from a cross-post of this meme in programmerhumor is worth sharing here:
The statement in this meme is false. There are many programming languages which can be written by humans but which are intended primarily to be generated by other programs (such as compilers for higher-level languages).
The distinction can sometimes be missed even by people who are successfully writing code in these languages; this comment from Jeffrey Friedl (author of the book Mastering Regular Expressions) stuck with me:
I’ve written full-fledged applications in PostScript – it can be done – but it’s important to remember that PostScript has been designed for machine-generated scripts. A human does not normally code in PostScript directly, but rather, they write a program in another language that produces PostScript to do what they want. (I realized this after having written said applications :-)) —Jeffrey
(there is a lot of fascinating history in that thread on his blog…)
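As a toy illustration of the distinction (not from Friedl’s post; just a sketch): a few lines of Python that emit a complete one-page PostScript program, the way a report generator or printer driver might, rather than a human writing the PostScript by hand.

```python
# Generate a minimal one-page PostScript program instead of writing it by hand.
lines = ["%!PS"]
for i, text in enumerate(["PostScript is usually written", "by other programs."]):
    lines.append("/Helvetica findfont 18 scalefont setfont")
    lines.append(f"72 {720 - 24 * i} moveto ({text}) show")
lines.append("showpage")

with open("generated.ps", "w") as out:
    out.write("\n".join(lines) + "\n")
```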
Arthur Besse@lemmy.ml to F-Droid@lemmy.ml • What're some apps you can't go without on F-droid? · 5 months ago
It’s on F-Droid if you load their repo.
It is not allowed in F-Droid’s official repos because it is not open source; anyone can run their own F-Droid repo and distribute proprietary software from it.
Also it’s open source but I understand some people don’t like their license.
It is not open source; that is a term with an internationally recognized definition. Even FUTO themselves now acknowledge it is not.
in other news, the market price of hacked credentials for MAGA-friendly social media accounts:
📈