Network Chuck’s earlier videos are pretty good, especially the You Suck At… series.
Unfortunately he’s been pushing AI shit lately.
I take my shitposts very seriously.
The problem is that syncing between devices is not implemented in KeePass itself but through an external tool (Nextcloud, Syncthing, or whatever else). The sync client will only see the ciphertext and won’t be able to tell which records have been changed, only that two different binary files have a common ancestor and are in conflict.
The most obvious solution is to lock and close the database when it’s not in use (which is a good practice from a security perspective too), and to sync immediately when it is changed.
tl;dr: yes, credentials are cached locally. https://github.com/dani-garcia/vaultwarden/discussions/4676
The major downside to KeePass’s single-file storage is that it’s easy to accidentally create a conflict between the copies on different devices if they’re not synced immediately. Conflicting files have to be merged manually, or data might be lost. I’ve run into this several times with KeePass + Nextcloud. In comparison, a central master database with local caches can resolve conflicts between individual records.
Tailscale should work. It uses WireGuard and does some UDP fuckery to get around the firewall and NAT (including CGNAT). I can stream Jellyfin through it at 1080p native with no significant buffering, so it’ll handle music just fine.


Is this what normies feel like when Linux users tell them to just use Linux? I have some apologies to make.


POW is a far higher cost on your actual users than the bots.
That sentence tells me that you either don’t understand or consciously ignore the purpose of Anubis. It’s not to punish the scrapers, or to block access to the website’s content. It is to reduce the load on the web server when it is flooded by scraper requests. Bots running headless Chrome can easily solve the challenge, but every second a client is working on the challenge is a second that the web server doesn’t have to waste CPU cycles on serving clankers.
POW is an inconvenience to users. The flood of scrapers is an existential threat to independent websites. And there is a simple fact that you conveniently ignored: it fucking works.
Interface configuration and DNS resolution are handled by different subsystems, and their configuration files have different formats. It’s been like this for many decades, and changing it just isn’t worth breaking existing systems.
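On a traditional Debian-style setup, for example, the split looks roughly like this (paths are the classic ifupdown defaults and the addresses are placeholders; your distro may use netplan, NetworkManager, or systemd-networkd instead):

# /etc/network/interfaces: interface configuration (ifupdown)
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

# /etc/resolv.conf: DNS resolution (the C library’s resolver)
nameserver 192.168.1.1
search home.lan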


No numbers, no testimonials, not even anecdotes… “It works, trust me bro” is not exactly convincing.
If this is as significant an issue as you imply, please link some credible sources.
As far as I can tell, the “Chinese server” (or EU server) is just a public ID and Relay server, and necessary for the application to function unless a self-hosted server is used.
You can host the open-source ID and Relay servers for simple remote access at no cost. The pro subscription is mainly about account and device management.
services:
  hbbs:
    container_name: hbbs
    image: rustdesk/rustdesk-server:latest
    command: hbbs
    volumes:
      - ./data:/root
    network_mode: "host"
    depends_on:
      - hbbr
    restart: always
  hbbr:
    container_name: hbbr
    image: rustdesk/rustdesk-server:latest
    command: hbbr
    volumes:
      - ./data:/root
    network_mode: "host"
    restart: always
Mount the network share on the host (fstab or mount.cifs) and pass the credentials with the username= and password= mount options. Then point the container’s volume at the mount point’s path.
https://www.mattnieto.com/how-to-mount-an-smb-share-to-a-docker-container-step-by-step/
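A minimal sketch, assuming the share is //nas.local/share, the container should see it at /data, and your host user is uid/gid 1000 (all of these are placeholders):

# /etc/fstab: mount the SMB share on the host at boot
//nas.local/share  /mnt/share  cifs  username=myuser,password=mypass,uid=1000,gid=1000  0  0

# docker-compose.yml: bind-mount the host path into the container
services:
  app:
    image: example/app:latest
    volumes:
      - /mnt/share:/data

A credentials= file with restricted permissions is a bit safer than putting the password straight into fstab, but the idea is the same.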


It’s possible that, when the ISP revokes the public address and assigns a new one, the DNS record isn’t updated immediately and still points to the old address. Then every new request would be sent to the old, invalid address.
And this is where I start shilling for Tailscale. It’s a WireGuard-based mesh VPN that is designed to work from behind firewalls, NAT, and CGNAT. It has its own internal split-DNS provider, and presumably some mechanism for handling public address changes that is transparent to the tunnelled traffic. You can use it to share the server with only the devices that have the client installed, or expose the server to the internet.
I’ve got it set up on my OPNsense firewall as a subnet router that advertises the subnet where my servers live, and I often stream from Jellyfin over it. There’s some overhead, but it’s never been disruptive.
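On a plain Linux host, the subnet router part is roughly this (the subnet is a placeholder; on OPNsense the same thing is configured through the Tailscale plugin, and the advertised route still has to be approved in the Tailscale admin console):

# let the host forward traffic for the advertised subnet
sudo sysctl -w net.ipv4.ip_forward=1
# advertise the LAN where the servers live to the tailnet
sudo tailscale up --advertise-routes=192.168.10.0/24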


What sounds like gatekeeping is often a strongly worded emphasis on having the prerequisite knowledge to not just host your services, but to do it in a way that is secure, resilient, and responsible. If you don’t know how to set up a network, set up resilient storage, manage your backups, set up HTTPS and other encryption, manage user authentication and privileges, and expose your services securely, you should not be self-hosting. You should be learning how to self-host responsibly. That applies to everything from Debian to Synology.
Friends don’t let friends expose their networks like Nintendo advises.


At work, we use PiSignage for a large overhead screen. It’s based on Debian and uses a fullscreen Firefox running in the labwc compositor. The developer advertises a management server (cloud or self-hosted) to manage multiple connected devices, but it’s completely optional (superfluous in my opinion) and the standalone web UI is perfectly usable.


You can absolutely use it without a reverse proxy. A reverse proxy is just a fancy HTTP client that contacts the server on the original client’s behalf and forwards the response back to it, usually wrapped in HTTPS. A man in the middle that you trust.
All you have to do is expose the desired port(s) to all addresses:
    # ...
    ports:
      - 8080:8080
…and obviously to set the URL environment variables to localhost or whatever address the server uses.
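For example, if the app reads its base URL from an environment variable (APP_URL here is a made-up name; use whatever variable the app actually documents), the same service block gets something like:

    environment:
      - APP_URL=http://192.168.1.10:8080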


I don’t know which feature you mean, can you link the documentation?


I used it for a while, and it’s a decent solution. Similar to Tailscale’s subnet router, but it always goes through a relay and doesn’t do any of the UDP black magic. I think it uses TCP to create the tunnel, which might add some latency compared to Tailscale or bare WireGuard.
Right… my mistake, I guess I had SSH config entries in Termux and never questioned whether SSH was using those or DNS.
Still, try to find some way to check which server is being queried. It might reveal connectivity problems with the local DNS server.
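A quick way to check is to query the name directly and look at which server answered (the hostname is a placeholder; in Termux both tools come from the dnsutils package):

# the ";; SERVER:" line at the end of dig’s output shows which resolver answered
dig server.lan
# nslookup prints the same information in its "Server:" line
nslookup server.lan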
@Mods, please don’t delete this. It’s a valuable lesson.