

I’d suggest maybe testing with a plain Debian or Fedora install. Just enable KVM and install virt-manager, and create the environment that way.
Unfortunately I’m not very familiar with Cloudstack or Proxmox; we’ve always worked with KVM using virt-manager and Cockpit.
Our usual method is to remove the default hard drive, reattach the qcow file as a SCSI device, and then we modify the SCSI controller that gets created to enable queuing. I’m sure at some point I should learn to do all this through the command line, but it’s never really been relevant to do so.
The relevant sections look like this in one of our prod VMs:
<disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/var/lib/libvirt/images/XXX.qcow2' index='1'/>
    <backingStore/>
    <target dev='sdb' bus='scsi'/>
    <alias name='scsi0-0-0-1'/>
    <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
    <driver queues='6'/>
    <alias name='scsi0'/>
    <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</controller>
The driver queues='X' line is the part you have to add. The number should equal the number of cores assigned to the VM.
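If you want to sanity-check that the queue setting actually took effect, something like this should do it (the VM and device names here are just examples, adjust for your setup):

# On the host: confirm the controller picked up the queues setting
virsh dumpxml VMNAME | grep -A 2 virtio-scsi

# Inside the guest: a multiqueue virtio-scsi disk should show one
# mq directory per queue, e.g. 0 through 5 for queues='6'
ls /sys/block/sdb/mq/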
See the following for more on tuning KVM:
What are your disk settings for the KVM environments? We use KVM at work and found that the default configuration loses you a lot of performance on disk operations.
Switching from SATA to SCSI driver, and then enabling queues (set the number equal to your number of cores) dramatically speeds up all disk operations, large and small.
On mobile right now but I’ll try to add some links to the KVM docs later.
Been using Xpipe for probably over a year now. It’s amazing and I wholeheartedly recommend it.
I just use a DDNS updater. That’s honestly good enough for most purposes.
Alternatively, you could use a service like Zerotier, Tailscale or Netbird to create a virtual private LAN connection to a free Oracle VPS, then route the traffic from the VPN to your home network.
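Roughly, the VPS side of that ends up looking something like this (the interface names and the 100.x address are placeholders from a Tailscale-style setup; swap in your own):

# On the Oracle VPS: enable forwarding, then DNAT incoming HTTPS to the
# home server's VPN address and masquerade the return traffic
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i ens3 -p tcp --dport 443 \
  -j DNAT --to-destination 100.64.0.2:443
iptables -t nat -A POSTROUTING -o tailscale0 -j MASQUERADE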
So, basically, the trick to setting this up in Caddy is more one of not doing anything. Caddy is so much smarter than Nginx that it just figures out all this stuff for you.
So this:
# Notes Server - With WebSocket
server {
    listen 80;
    server_name notes.domain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name notes.domain.com;

    ssl_certificate /etc/letsencrypt/live/notes.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/notes.domain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://localhost:5264/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 3600;
        proxy_send_timeout 3600;
    }
}
in Caddy becomes this:
notes.domain.com {
    reverse_proxy IP_ADDRESS:5264
}
Yeah. This is why I love Caddy.
In the end I only had to include a couple of the header modifiers to get everything working. So my finished file looked like this:
auth.domain.com {
    reverse_proxy IP_ADDRESS:8264 {
        header_up Host {host}
        header_up X-Real-IP {remote_host}
    }
}

notes.domain.com {
    reverse_proxy IP_ADDRESS:5264
}

events.domain.com {
    reverse_proxy IP_ADDRESS:7264
}

mono.domain.com {
    reverse_proxy IP_ADDRESS:6264
    header / Cache-Control "public, no-transform"
    header / X-Cache-Status $upstream_cache_status
}
Obviously, update “domain.com” and “IP_ADDRESS” to the appropriate values. I’m actually not even 100% sure that all of that is necessary, but my setup seems to be working, including the monograph server.
One very important aside though; in your .env file, don’t do this:
AUTH_SERVER_PUBLIC_URL=https://auth.domain.com/
NOTESNOOK_APP_PUBLIC_URL=https://notes.domain.com/
MONOGRAPH_PUBLIC_URL=https://mono.domain.com/
ATTACHMENTS_SERVER_PUBLIC_URL=https://files.domain.com/
Those trailing slashes will mess everything up. Strip them off so it looks like this:
AUTH_SERVER_PUBLIC_URL=https://auth.domain.com
NOTESNOOK_APP_PUBLIC_URL=https://notes.domain.com
MONOGRAPH_PUBLIC_URL=https://mono.domain.com
ATTACHMENTS_SERVER_PUBLIC_URL=https://files.domain.com
Took me a while to work that one out.
I might still need to tweak some of this. I’m getting an occasional “Unknown network error” in the app, but all my notes are syncing, monographs publish just fine, and generally everything else seems to work, so I’m not entirely sure what the issue is that Notesnook is trying to tell me about, or if it’s even something I need to fix.
Edit: OK, the issue was that I didn’t have files.domain.com set up. Just directly proxying it solves one error, but creates another, so I’ll need to play with that part a little more. It’s probably down to Minio doing its own proxying on the backend (because it rewrites http requests at 9009 to https at 9090). Will update when I get it working. Anyway, for now everything except attachments seems to work.
+1 for LocalSend. Well worth checking out.
Seconding this, I really can’t see the point of encryption on local-only connections. Are you really worried about someone hacking your WiFi?
Anyway, if you do want to do a reverse proxy, I’ll make my usual recommendation of Caddy instead. It handles certificates for you, using Let’s Encrypt, so there’s no need to add exceptions in your browser. And reverse proxying with Caddy is literally a one line config.
I’m a huge fan of Caddy and I wish more people would try it. The utter simplicity of the config file is breathtaking when you compare it with Apache or Nginx. Stuff that takes twenty or thirty lines in other webservers becomes just one in Caddy.
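For anyone who hasn’t tried it, a complete HTTPS reverse proxy in a Caddyfile really is just the one directive (the domain and port here are placeholders):

app.example.com {
    reverse_proxy localhost:8080
}

Caddy obtains and renews the certificate for app.example.com on its own; no separate certbot setup needed.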
Well, thanks to your guidance I was able to get my own server up and running. Converting the reverse proxy to Caddy was very easy, but then everything involving Caddy is stupidly easy. That also removed all the steps involving certs.
I’m going to try leaving out the subdomain for the S3 storage. Notesnook doesn’t seem to require it in the setup, whereas the other four addresses are specifically requested, and I feel like it would be better for security to not have Minio directly accessible over the web.
I also really want to try attaching their web app to this. They don’t seem to have their own docker release for it though, unless I missed something.
Hi, thank you so much for posting this. It’s a much better tutorial than the one provided by the Notesnook devs.
With that being said, I think it would be really helpful to have a bit more of a breakdown of what these individual components are doing and why. For example, what is the actual purpose of strapping a Monograph server onto this stack? Is that needed for the core Notesnook server to work, or is it optional? Does it have to be accessible over the web or could we leave that as a local access only component? Same questions for the S3 storage. Similarly, it would be good to get a better understanding of what the relationship is between the identity server and the main server. Why do both those components have to be web accessible at different subdomains?
This sort of information is especially helpful to anyone trying to adapt your process; for example, if they’re using a different reverse proxy, or if they wanted to swap in a different storage back-end.
Anyway, thanks again for all the time you put into this, it is really helpful.
Idk if there’s something like LineageOS for AndroidTV, that would be great.
Agreed, I would love this.
As others have suggested, OSMC is OK, but personally I prefer having Android so that I can use SmarttubeNext and access native apps for stuff like Jellyfin, Dropout, Nebula, etc. For years I played with various Linux options, but in the end I ditched it all for an Nvidia Shield and I couldn’t be happier with the results.
Your specific questions have already been answered elsewhere in this thread, but I just want to add my usual plea to not use Portainer.
I’ve spent a lot of time with Portainer, both in my homelab and at work, and in both environments I eventually replaced it with Dockge, which is far superior, both for experienced users and newbies.
Basically, the problem with Portainer is that it wants you to be in an exclusive relationship with it. For example, if you create containers from the command line like you described, Portainer only has very limited control over them. Dockge, on the other hand, is very comfortable switching back and forth between command line and UI. In Portainer, when you do create your compose files from the UI, it then becomes very difficult to interact with them from the command line. Dockge doesn’t give a shit, and keeps all the files in an easy location you choose.
Dockge will also do what you described in 5): take a docker command and turn it into a compose file. And it gives you much better feedback when you screw up. All in all it’s just a better experience.
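If you want to try it, Dockge itself is just one container. Something along these lines is close to what the project README suggests (double-check the current docs; the port and paths below are the usual defaults):

docker run -d --name dockge --restart unless-stopped \
  -p 5001:5001 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v ./dockge-data:/app/data \
  -v /opt/stacks:/opt/stacks \
  -e DOCKGE_STACKS_DIR=/opt/stacks \
  louislam/dockge:1

The /opt/stacks mount is that easy location: every stack you create in the UI lives there as a plain compose file you can edit by hand.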
Ooh, I will be giving this a go!
It’s ugly and less user friendly. Not a fan.
I mean, for anything where you’re willing to trust the container provider not to push breaking changes, you can just run Watchtower and have it automatically update. That’s how most of my stuff runs.
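If you haven’t set it up before, Watchtower itself is just another container with access to the Docker socket; something like this should work (the interval is in seconds, but check the Watchtower docs for anything I’ve misremembered):

docker run -d --name watchtower --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --cleanup --interval 86400

That polls for new images once a day, pulls them, recreates the affected containers, and cleans up the old images.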
There’s no good answer to that because it depends entirely on what you’re running. In a magical world where every open source project always uses the latest versions of everything while also maintaining extensive backwards compatibility, it would never be a problem. And I would finally get my unicorn and rainbows would cure cancer.
In practice, containers provide a layer of insurance that it just makes no sense to go without.
Personally, I always like to use containers when possible. Keep in mind that unlike virts, containers have very minimal overhead. So there really is no practical cost to using them, and they provide better (though not perfect) security and some amount of sandboxing for every application.
Containers mean that you never have to worry about whether your VM is running the right versions of certain libraries. You never have to be afraid of breaking your setup by running a software update. They’re simpler, more robust and more reliable. There are almost no practical arguments against using them.
And if you’re running multiple services the advantages only multiply because now you no longer have to worry about running a bespoke environment for each service just to avoid conflicts.
So, unfortunately, this latest update seems to have created a lot of issues. First off, MobaXTerm support appears to be borked. Second, attempting to connect directly to LXC containers throws an error because I haven’t linked a WSL2 instance for X11, even though X forwarding is not enabled for the connection.