

Sadly my password stopped working like 10 years ago and I haven’t gotten a response on the forums :(
There’s a ticket in the Firefox bug tracker that appears to be tracking this: https://bugzilla.mozilla.org/show_bug.cgi?id=1913601
Makes sense, because even in reader mode it shows https, and the issue happens on other major sites like Wikipedia.
That seems kind of like pointing to reverse engineering communities and saying that binaries are the preferred format because of how much they can do with them. Sure, you can modify finished models a lot, but what you can do with just pre-trained weights versus being able to replicate the final training run or change the training parameters is an entirely different beast.
There’s a reason why the OSI stipulates that the code and parameters used to train are considered part of the “source” that should be released in order to count as an open source model.
You’re free to disagree with me and the OSI though, it’s not like there’s 1 true authority on what open source means. If a game that is highly modifiable and moddable despite the source code not being available counts as open source to you because there are entire communities successfully modding it, then all the more power to you.
It’s worth noting that OpenR1 have themselves said that DeepSeek didn’t release any code for training the models, nor any of the crucial hyperparameters used. So even if you did have suitable training data, you wouldn’t be able to replicate it without re-discovering what they did.
OSI specifically makes a carve-out that allows models to be considered “open source” under their open source AI definition without providing the training data. So when it comes to AI, open source is really about providing the code that kicks off training, checkpoints if used, and enough detail about training data curation that a comparable dataset could be compiled to replicate the results.
It really comes down to this part of the “Open Source” definition:
The source code [released] must be the preferred form in which a programmer would modify the program
A compiled binary is not the format in which a programmer would prefer to modify the program - it’s much preferable to have the text file which you can edit in a text editor. Just because it’s possible to reverse engineer the binary and make changes by patching bytes doesn’t make it count. Any programmer would much rather have the source file instead.
Similarly, the released weights of an AI model are not easy to modify, and are not the “preferred format” that the internal programmers use to make changes to the AI model. They are typically making changes to the code that does the training and to the training dataset. So for the purpose of calling an AI “open source”, the training code and data used to produce the weights are considered the “preferred format”, and are what need to be released for it to really be open source. Internal engineers also typically use training checkpoints so that they can roll back the model and redo some of the later training steps without redoing all training from the beginning; this is also considered part of the preferred format if it’s used.
OpenR1, which is attempting to recreate R1, notes: “No training code was released by DeepSeek, so it is unknown which hyperparameters work best and how they differ across different model families and scales.”
I would call “open weights” models actually just “self hostable” models instead of open source.
Partially yes, the tricky thing is that when using network_mode: "service:tailscale" (presumably on the caddy container, since that’s what needs to receive traffic from the tailscale network), you won’t be able to attach the caddy container to any networks, since it’s using the tailscale container’s network stack. This means that in order for caddy to reach your containers, you will need to add the tailscale container itself to the relevant networks. Any attached containers will be connected as well.
(Not sure if I misread the first time or if you edited, but the way you say it is right: add the tailscale container to the proxy network so that caddy will also be added and can reach the containers.)
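To make that concrete, here’s a rough sketch of what the caddy/tailscale pairing could look like. The tailscale image, env vars, and the “proxy” network name are assumptions based on tailscale’s container docs rather than your setup, and the auth key is a placeholder; check their docs for the exact auth key and state handling.

services:
  tailscale:
    image: tailscale/tailscale
    hostname: proxy                    # name the node will show up as on the tailnet (assumption)
    environment:
      - TS_AUTHKEY=tskey-auth-xxxx     # placeholder auth key
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ./ts-state:/var/lib/tailscale  # persist tailscale state across restarts
    networks:
      - proxy                          # tailscale (and therefore caddy) joins the docker network with your apps

  caddy:
    image: caddy
    network_mode: "service:tailscale"  # caddy shares tailscale's network stack, so it sees the tailnet IP
    depends_on:
      - tailscale
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile

networks:
  proxy:
    external: true

Caddy listens on 80/443 inside the shared network namespace, so it’s reachable at the tailnet IP, and it reverse-proxies to your other containers by their service names on the proxy network, same as it would without tailscale in front.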
Here’s the super condensed version of what matters for connecting traefik/caddy to a VPN like wireguard/tailscale.
My traefik compose:
services:
  wireguard:
    container_name: wireguard
    networks:
      - ingress

  traefik:
    network_mode: "service:wireguard"
    depends_on:
      - wireguard
    command:
      - "--entryPoints.web.proxyProtocol.trustedIPs=10.13.13.1" # Trust remote tunnel IP, the WG container is 10.13.13.2
      - "--entrypoints.websecure.address=:443"
      - "--entryPoints.websecure.proxyProtocol.trustedIPs=10.13.13.1"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
      - "--entrypoints.web.http.redirections.entrypoint.priority=100"
      - "--providers.docker.exposedByDefault=false"
      - "--providers.docker.network=ingress"

networks:
  ingress:
    external: true
And then in a service’s docker-compose:
services:
  ui:
    image: myapp
    read_only: true
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.myapp.rule=Host(`xxxx.xxxx.xxxx`)"
      - "traefik.http.services.myapp.loadbalancer.server.port=80"
      - "traefik.http.routers.myapp.entrypoints=websecure"
      - "traefik.http.routers.myapp.tls.certresolver=mytlschallenge"
    networks:
      - ingress

networks:
  ingress:
    external: true
(edited to fix formatting on mobile)
I’ve done something similar but I’m not sure how helpful my example would be because I use wireguard instead of tailscale and traefik instead of caddy.
The principle is the same though, iirc I have my traefik container set to network_mode: "service:wireguard" so that the traefik container uses the wireguard container’s network stack. That way the traefik container also sees the wireguard interface and can receive traffic going to the wireguard IP. Then at the other end of the wireguard tunnel I can use haproxy to pass traffic to the wireguard IP through the tunnel and it automatically hits traefik.
If the battery inverter in the Anker box doesn’t pass through grid power then I think you would use an automatic transfer switch that switches between mains and battery inverter depending on which is powered. I had dreams of offsetting my homelab power with solar + battery + inverter.
If the machine doesn’t boot, you can use this to access the BIOS and remotely boot a recovery environment of your choice via PXE boot.
Idk about that, but I have trained my cat to shake, high five, nose kiss, sit, stand on hind legs, and scratch her post on command. I’m currently working on roll over and spin.
Immich has a setting that does automatic photo backup over WiFi; I use the Android app as a Google Photos replacement. You can choose as many folders on your phone as you want (I just do camera roll), enable backup only over WiFi, and it backs up all the photos in original quality. I self-host the server on my Synology with a reverse proxy (can’t forward ports at my current place due to CGNAT) so I can access it from anywhere.
I believe the app is cross-platform, so the iPhone version should be identical to the Android one.
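For reference, a rough sketch of the server side of that setup. The image tag, env vars, and port are from memory of the Immich docs and may not match the current release; the postgres and redis services from the official compose are omitted here, and the paths and password are placeholders.

services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    volumes:
      - ./library:/usr/src/app/upload   # original-quality uploads end up here
    environment:
      DB_HOSTNAME: database             # postgres service from the official compose (not shown)
      DB_USERNAME: postgres
      DB_PASSWORD: changeme             # placeholder
      DB_DATABASE_NAME: immich
      REDIS_HOSTNAME: redis             # redis service from the official compose (not shown)
    ports:
      - "2283:2283"                     # web/API port in recent releases; point your reverse proxy at this
    restart: always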
Woah federation would be huge!
Someday I would love to be able to share and receive shared photos / albums to and from users on different servers. Especially if it lets me sync the original files so that I can keep a copy in case their server goes down. It would also be neat if you could enable ActivityPub so that your account could show up as a fediverse user that people can follow for public or approved-followers-only posts; Pixelfed compatibility would be super cool.
Keep in mind that if you set up RAID using ZFS or btrfs (idk how it works with other systems but that’s what I’ve used) then you also get scrubs, which detect and fix bit rot and unrecoverable read errors. Without that or a similar system, those errors will go undetected and your backup system will back up those corrupted files as well.
Personally, one of the main reasons I used ZFS and now btrfs with redundancy is to protect irreplaceable files (family memories and stuff) from those kinds of errors. I used to just keep stuff on a hard drive until I discovered that loads of my irreplaceable vacation photos were corrupted, including the backups, which had backed up the corruption.
If your files can be reacquired, then I don’t think it’s a big deal. But if they can’t be, then I think having scrubs or integrity checks with redundancy so that issues can be repaired, as well as backups with snapshots so that errors or mistakes don’t propagate into your backups, is a necessity. But it just depends on how much you value your files.
My primary use case is safeguarding my important personal artifacts (family photos, digitized paperwork, encryption key / account recovery / 2FA backups) against drive failure (~2TB), followed by my decently sized Plex server (23TB), Immich, Nextcloud, and various other small things like self-hosted Bitwarden, Grocy, Ollama, and stuff like that.
I run all of my stuff off of a 6-bay Synology (more drives help with capacity efficiency: double redundancy across 6 drives only costs you about a third of the raw capacity, and I wanted to be protected against a drive failure during a rebuild) with an Intel NUC on top to run Plex/Jellyfin transcoding using QuickSync instead of loading the poor NAS with CPU transcoding. I also run Ollama on the NUC since it has faster cores than the NAS.
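As an illustration, here’s a minimal sketch of the NUC side of that: Jellyfin with the Intel iGPU passed through for QuickSync. The paths are placeholders, and hardware transcoding still has to be enabled in Jellyfin’s playback settings.

services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      - /dev/dri:/dev/dri        # expose the Intel iGPU so QSV/VAAPI transcoding works
    volumes:
      - ./config:/config
      - ./cache:/cache
      - /path/to/media:/media:ro # media library, mounted read-only (placeholder path)
    ports:
      - "8096:8096"              # default Jellyfin web port
    restart: unless-stopped

Depending on the distro, the container user may also need access to the host’s render group for /dev/dri.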
Wait you can train the Futo keyboard? I tried it a while ago and noticed the poor accuracy and decided to shelve it for a while.
Either that or charging a microtransaction for loading the page. But yeah, the goal is to make it cost a small amount that is insignificant to a regular user but adds up to a huge amount at the scale of a spam farm. It’s also the same rationale behind hashing passwords with multiple rounds: it adds a tiny lag when you log in correctly, but an insane amount of work if you’re checking every phrase in a password-cracking dictionary in an offline attack, because it all adds up. (In the online scenario you just block them after a few attempts.)
I’ve done a backup swap with friends a couple of times. Security wasn’t much of a worry, since we connected to each other’s boxes over SSH or WireGuard or similar and used tools that allowed encryption. The biggest challenge for us was that in my self-hosting friend group we all prefer different protocols, so we had to figure out what each of us wanted to use to connect and access filesystems and set that up. The second challenge was ensuring uptime and that the remote access we set up for each other stayed up. That’s what killed the project: we all eventually stopped maintaining the remote access and nobody seemed to care. So if I were to do it again, I would make sure all participants have alerts monitoring their shared endpoint.
Federation sounded interesting, so I looked at the website, and it sounds like on-prem instances can’t yet federate with people using “cloud” (which I guess is the hosted version); they can only federate with other on-prem instances.
It looks promising though and would be cool to host my own instance and still chat with friends.
I’m on unrooted LineageOS with MindTheGapps / Google Play services and my Google Services Framework ID registered with Google, but I still have to make 3 attempts to log in to my bank, with the first 2 attempts always giving a vague error like “we’re not sure why we couldn’t connect”; similar with Fidelity. I’m using a password manager, so I’m entering the same credentials every time.
(Edit: in the case of Fidelity, instead of faking a connection issue it tells me my account is blocked and to call support to unblock it. That’s also fake, because I called once and they said my account wasn’t locked, and trying to log in a second time always works.)
My understanding is that it’s impossible to pass strong integrity unless you’re using the stock unmodified ROM with the bootloader locked.
I changed banks last week and the new bank (Aspiration) logs in fine the first time every time.
It sounds like the situation is better with GrapheneOS, but I find it a lot easier to switch banks than ROMs.
Sadly the only power you have is to switch banks, and to let them know why when you close your account. I switched from Ally to Aspiration partially because of this.