

Dealing with a DS-Lite connection from a fiber optic provider on the latest OPNsense.
It is excruciating how difficult it is to run it reliably.
Some clarifications:
The 3-2-1 rule applies only to the data, not to the backups. In my case I have the real/live data, then a daily snapshot in the same volume/pool, and an external off-site backup.
For the databases you got misleading information: you can copy the files as they are, BUT you need to be sure that the database is not running (you could copy the data in the middle of a transaction, leading to problems later) AND when you restore, you need to restore to the exact same database version.
Using the export functionality, you ensure that the data is not corrupted (the database guarantees the consistency of the export) and you keep the possibility of restoring to another database version.
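For instance, a minimal sketch of such an export, assuming a Postgres database (host, user and database names are placeholders, and authentication is assumed to be handled via .pgpass or PGPASSWORD; other databases have their own equivalents like mysqldump):

```python
import subprocess
from datetime import date

# Hypothetical path and names: adjust to your setup.
dump_file = f"/backups/myapp-{date.today()}.sql"

# pg_dump produces a logical export that a newer Postgres can restore,
# unlike a raw copy of the data directory, which is version-locked.
with open(dump_file, "w") as out:
    subprocess.run(
        ["pg_dump", "-h", "localhost", "-U", "myapp", "myapp"],
        stdout=out,
        check=True,
    )
```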
My suggestion: use borgbackup or any other backup system with deduplication, stop Docker to ensure no corruption, and save everything. A minute of downtime every day is usually not a deal breaker for home users.
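A minimal sketch of that daily routine, assuming a Docker Compose project and an already initialized Borg repository (all paths are placeholders):

```python
import subprocess

COMPOSE_DIR = "/srv/docker"        # hypothetical path to your compose project
REPO = "/mnt/backup/borg-repo"     # hypothetical, already-initialized Borg repo
DATA = "/srv/docker"               # what to back up

def run(*cmd):
    subprocess.run(cmd, check=True)

# Stop the containers so no file is written mid-backup...
run("docker", "compose", "--project-directory", COMPOSE_DIR, "stop")
try:
    # ...snapshot everything; Borg deduplicates, so daily runs stay cheap.
    # Borg expands the {now} placeholder into a timestamp.
    run("borg", "create", "--stats", f"{REPO}::daily-{{now}}", DATA)
finally:
    # Bring the services back up even if the backup failed.
    run("docker", "compose", "--project-directory", COMPOSE_DIR, "start")
```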
If you already have a will, the most secure, idiot-proof way is to add that key + instructions to the will. Get some lawyers on board for that and it will work.
If you still have concerns about having the full key in a single place, add a TOTP or a second factor of identification and distribute it between your heirs.
Sometimes, the old-fashioned way is the best one by far.
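If you want to see how the splitting idea could look, here is a toy illustration of dividing a key between two heirs with a simple XOR split; for more heirs or a threshold scheme, something like Shamir's secret sharing would be the standard tool:

```python
import secrets

key = b"my-very-secret-master-key"   # the secret you do not want in one place

# share_a is pure randomness; share_b = key XOR share_a.
# Neither share alone reveals anything about the key; together they rebuild it.
share_a = secrets.token_bytes(len(key))
share_b = bytes(k ^ a for k, a in zip(key, share_a))

# Hand share_a to one heir and share_b to the other (e.g. via the will).
recovered = bytes(a ^ b for a, b in zip(share_a, share_b))
assert recovered == key
```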
Try Kasm.
Honestly, you don't feel the lag of the connection (unless it is a severely limited one), and it also allows multiple simultaneous user connections.
Check it out and come back with your feedback!
Set up a DNS guardian system to make sure they are not accessing forbidden web pages (like OnlyFans or Instagram). This extra layer of annoyance is pretty useful, because a lot of kids' apps have obvious holes to side-load web pages, which always keeps me wondering whether they were there by intention.
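To make the idea concrete, here is a toy sketch of such a DNS guardian: a tiny UDP forwarder that drops queries for blocked domains. The upstream resolver and blocklist are placeholders, it handles one query at a time, and in practice a ready-made tool like Pi-hole does this far more robustly:

```python
import socket

UPSTREAM = ("9.9.9.9", 53)                     # upstream resolver (assumption)
BLOCKED = {"onlyfans.com", "instagram.com"}    # the "forbidden" list

def qname(packet: bytes) -> str:
    """Extract the queried name: length-prefixed labels after the 12-byte header."""
    labels, i = [], 12
    while packet[i] != 0:
        n = packet[i]
        labels.append(packet[i + 1:i + 1 + n].decode())
        i += 1 + n
    return ".".join(labels).lower()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 53))                     # needs root / CAP_NET_BIND_SERVICE

while True:
    query, client = sock.recvfrom(512)
    name = qname(query)
    # Drop queries for blocked domains (and their subdomains): the client
    # simply times out. Everything else is forwarded and the reply relayed.
    if any(name == d or name.endswith("." + d) for d in BLOCKED):
        continue
    sock.sendto(query, UPSTREAM)
    reply, _ = sock.recvfrom(4096)
    sock.sendto(reply, client)
```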
Apart from that, I would set up a restricted shell that executes only the approved apps.
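A minimal sketch of such a restricted launcher, with a hypothetical whitelist of approved apps; set something like this as the account's login shell and only whitelisted commands can ever be started:

```python
import subprocess

# Whitelist of approved apps (hypothetical paths) keyed by a menu choice.
APPROVED = {
    "1": ("Browser (filtered)", ["/usr/bin/firefox"]),
    "2": ("Word processor", ["/usr/bin/libreoffice", "--writer"]),
}

while True:
    for key, (label, _) in APPROVED.items():
        print(f"{key}) {label}")
    choice = input("App to start (q to quit): ").strip()
    if choice == "q":
        break
    if choice in APPROVED:
        # Only commands from the whitelist are ever executed.
        subprocess.run(APPROVED[choice][1])
    else:
        print("Not an approved app.")
```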
I would recommend an LDAP server for user auth.
There you can create/authenticate users against a central repository in a machine-independent fashion. Also, being able to allow/deny specific services from the central database is a big plus.
It seems difficult at the very beginning, but it quickly pays off. Give it a try!
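As a taste of how little code the client side needs, here is a minimal sketch of authenticating a user with a simple bind, assuming the ldap3 Python library (pip install ldap3) and a classic ou=people tree; server name and DN layout are placeholders:

```python
from ldap3 import Server, Connection

def authenticate(username: str, password: str) -> bool:
    """Check credentials against the central LDAP directory."""
    server = Server("ldap://ldap.home.lan")          # hypothetical server
    user_dn = f"uid={username},ou=people,dc=home,dc=lan"
    # A successful simple bind as the user's own DN proves the password.
    conn = Connection(server, user=user_dn, password=password)
    ok = conn.bind()
    conn.unbind()
    return ok

print(authenticate("alice", "s3cret"))
```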
Yes, you will definitely get a better deal going with a home-made solution here.
Buuuut, there is an important point to highlight: the probability of Synology fucking your data up is much lower than that of the average self-hoster. Unless you already know the pros, the cons, and how to solve problems without data loss almost perfectly, you are not better than the average.
As an example, I went with a Synology box even though I consider myself better than average, because the data in my NAS is extremely (but really extremely) important to me and my wife. And the price was a reasonable fee to keep that data safe.
So, evaluate yourself: if the data is really important and you are not a really good sysadmin, go with a professional solution. If not, go with a DIY solution and learn in the process.
Just my two cents
Totally overkill. Even if you cut the specs in half, I have the feeling they are still overkill.
The only open point is the HDDs and the mass storage; I cannot decide if it is a lot or not, but for your list I would say you can even go one order of magnitude down. It mainly depends on the number of Linux ISOs you want to archive.
My points are totally in the other direction:
And then, as a second league, the points that tip the balance:
That’s all from my side
Totally agree with the first point: it is a limitation, and the guest wifi sticking to an eth port is just a patch. One that works, but still a patch.
But I don't see the point about the prefixes. What do you mean? I also have a custom domain and a local DNS server and can use the domain even internally. I just simply ignore that…
Fritzbox boxes.
They tick all the checkboxes
It is a well-known brand in Germany but pretty unknown outside that country. Honestly, it is the best bang for the buck I was able to get.
Honestly, I would spend 10 minutes checking them out.
No idea at all, but I am highly interested in your experience. So it would be great if you could come back here to share it with us.
This is the answer
Yes, it will be enough if your services are not exposed via port forwarding; Tailscale / ZeroTier are super convenient for this.
Honestly, if I were you I would start thinking about having a small computer just to act as a proxy/firewall for your Synology, or even better, run the applications on that computer and let the NAS only serve files and data.
It is much easier to support, maintain and harden a Debian with a minimal installation than any Synology box, just because of the amount of resources available to do so. This easy way, you could extend the life of your NAS far beyond the end of life of its software.
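Just to show the shape of the proxy idea, here is a toy sketch using Python's standard library; the NAS address is a placeholder, and a real deployment would use something like nginx or Caddy instead:

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import urllib.request

NAS = "http://192.168.1.10:5000"   # hypothetical LAN address of the Synology

class Proxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Relay the request to the NAS; only this small box faces the internet.
        try:
            with urllib.request.urlopen(NAS + self.path) as upstream:
                body = upstream.read()
                self.send_response(upstream.status)
                for name, value in upstream.getheaders():
                    # Skip hop-by-hop headers; the body is already de-chunked.
                    if name.lower() not in ("transfer-encoding", "connection"):
                        self.send_header(name, value)
                self.end_headers()
                self.wfile.write(body)
        except Exception:
            self.send_error(502)

ThreadingHTTPServer(("0.0.0.0", 8080), Proxy).serve_forever()
```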
Don't make it available from the internet. This will solve the issue.
If that is not possible: once the CVE is published and properly described, perhaps there is another way to secure it via an external proxy or even a WAF.
If you have unsupported software, it is always a pain in the ass to keep it secure, so always try to make the first point work.
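As an illustration of the WAF idea, here is a small filter a fronting proxy could apply before requests reach the old app; these patterns are made-up examples (path traversal, raw SQL, shell metacharacters), not a real ruleset:

```python
import re

# Illustrative patterns that commonly show up in exploit probes.
RULES = [
    re.compile(r"\.\./"),                  # path traversal
    re.compile(r"(?i)\bunion\s+select\b"), # crude SQL injection probe
    re.compile(r"[;|`]"),                  # shell metacharacters
]

def is_blocked(path_and_query: str) -> bool:
    """Return True if the request should be rejected before reaching the app."""
    return any(rule.search(path_and_query) for rule in RULES)

print(is_blocked("/photos?id=1"))                   # False
print(is_blocked("/photos?id=1 UNION SELECT pw"))   # True
```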
Can someone be so kind as to explain to me what I am seeing?
Because it seems like I am not clever enough to get it.
The answer is mTLS.
But you will run into the key distribution problem. If your number of devices is manageable, though, it could be the solution.
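For a feel of how small the server side is, here is a minimal mTLS sketch with Python's standard library, assuming you run your own little CA; all file paths are placeholders:

```python
import ssl
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"authenticated by client certificate\n")

# Placeholder paths for your own small CA and the server's keypair.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")
ctx.load_verify_locations("ca.crt")
ctx.verify_mode = ssl.CERT_REQUIRED   # reject any client without a cert from our CA

srv = HTTPServer(("0.0.0.0", 8443), Hello)
srv.socket = ctx.wrap_socket(srv.socket, server_side=True)
srv.serve_forever()
```

Each device then gets its own client certificate signed by that CA, which is exactly where the key distribution pain comes in.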
This thing reduces the attack surface of the Immich installation.
Whether it is good or bad, or fits your security model, can only be said by you. But honestly, it sounds like a sensible thing to do.
You will need to explain this statement a bit further to a mildly knowledgeable internet stranger…
Because the point of a WAF is exactly reducing the exposed surface…