I think it’s handguns and anything semi-automatic or automatic that are designed for violence.
Basically anything that makes it simple to shoot more than twice, or makes it easy/convenient to carry.
Bolt action or double barrel shotguns are for hunting or actual self defence.
They are tools.
Pump actions, handguns, semi-autos and automatics are for “I have made a very bad mistake”.
If your rifle is semi-automatic, have there ever been actual occasions where you have gone “thank god this is semi-automatic”?
I’m currently reconsidering using a couple of MikroTiks for some layer 3 hardware offloading.
Not really homelab, but close.
I have a project that gets integrated with another network for an event. I’m thinking of using 2x CRS504 (because I’m using MLAG for the servers, with VRRP or whatever for the “public” IP, which is all internal anyway) and seeing if I can get L3HW working as a router.
While I could sit on a subnet of the “host” network, having a gateway that traffic goes through allows me to test and prove everything for my system in my homelab, with just the final integration being a do-in-a-time-crunch problem.
I’m already using the CRS504s for networking (I bought them ages ago, thinking 25gbps was going to be as easy as 10gbps. It’s all running at 10gbps), and this saves having to use something else as a router, cuts down on rack space, all sorts of benefits. I think.
Anyone have any experience with mikrotik l3hw offloading?
My actual homelab is just a NAS and some networking. It’s a small flat, it’s just me. Not complicated, no need to give me more headaches!
Narrator: They hadn’t.
XML is extremely verbose.
Again, it requires some other tooling to generate (I feel I can point to JavaScript as an example of XML manipulation).
I feel I spend more time iterating yaml.
There isn’t any tooling that actually helps you write it.
I feel like there is a gap in the market for a solution that uses typescript, typed python or some other type-able scripting language, which then generates the yaml files.
A language that has language servers, intellisense, all the modern dev tools. Schemas are provided as simple type descriptors. And whatever script you write then produces the correct result.
Some sort of framework on top of that to provide an opinionated workflow, and some tooling to lint/validate/produce.
And the result is yaml files which can be checked/diffed against in-place config, and version controlled for consistency.
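To make that concrete, here’s a rough sketch of the sort of thing I mean in Python (the Deployment/Service types and their fields are invented for illustration, and it assumes PyYAML for the final dump):

```python
from dataclasses import dataclass, field, asdict
import yaml  # PyYAML, only needed for the final dump

# Hypothetical schema expressed as plain type descriptors; every name here is made up.
@dataclass
class Service:
    name: str
    image: str
    replicas: int = 1

@dataclass
class Deployment:
    environment: str
    services: list[Service] = field(default_factory=list)

deploy = Deployment(
    environment="staging",
    services=[
        Service(name="web", image="nginx:1.27"),
        Service(name="api", image="myapp:latest", replicas=2),
    ],
)

# The typed objects are what you edit (with a language server and type checker);
# the YAML file is just the generated artifact you diff and commit.
with open("deploy.yaml", "w") as f:
    yaml.safe_dump(asdict(deploy), f, sort_keys=False)
```

Typos and missing fields get caught by the type checker before any YAML exists, and the output is still plain YAML you can diff against what’s deployed.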
I guess it’s like if HTML tried to also adopt its own scripting language. Whereas JS interacts with the HTML DOM from the outside: sure, it has quirks, but it essentially just modifies a config.
I’ve never found a nice way of writing YAML with variables and configurability.
Trying to use YAML to natively describe how a YAML config should be produced is broken. It diverges from the underlying schema, and (because it’s .yaml) isn’t distinguishable from any other YAML.
Things like Helm treat YAML as a template, and I don’t think language servers & tooling are up to scratch there yet (happy to be corrected), so basic YAML formatters shit the bed.
Yaml is a computer readable config file that tries to be human readable, and fails at being actually useful.
Why projects try and make it useful, I will never understand.
I honestly think generating yaml from something like python would be a million times easier.
But then tools like ansible adopt yaml to essentially be a scripting language. As opposed to creating an actually decent solution that uses both python (to generate) and yaml (to apply).
Or whatever language.
“uses yaml for scripting so it’s clean and readable”
Eh…
I guess yaml is fine.
I hate the significance of whitespace, and the fact that I cannot find any editor that can auto-format. Which are both related, I guess: there is no way to know a yaml document is actually correctly formatted without knowing the intended schema.
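To illustrate what I mean: both of these documents are valid YAML, and a formatter has no way of knowing which nesting was intended without the schema (quick sketch, assumes PyYAML):

```python
import yaml  # PyYAML

# One level of indentation is the only difference, and both parse fine.
doc_a = """
server:
  host: example.com
  port: 8080
"""
doc_b = """
server:
  host: example.com
port: 8080
"""
print(yaml.safe_load(doc_a))  # {'server': {'host': 'example.com', 'port': 8080}}
print(yaml.safe_load(doc_b))  # {'server': {'host': 'example.com'}, 'port': 8080}
```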
Whereas JSON doesn’t have this ambiguity. But JSON has its own drawbacks.
I don’t think you could have handled the correction any better.
ducks
For point number 2, security through obscurity is not security.
Besides, all issued certificates are logged publicly. You can search them here https://crt.sh
Nginx Proxy Manager is easy to set up, will do LE ACME certs, and has a nice GUI to manage it all.
If it’s just access to your stuff for people you trust, use tailscale or wireguard (or some other VPN of your choice) instead of opening ports to the wild internet.
Much less risk
The default config is defined in the firmware. It can’t be deleted or changed (well, not easily; I think there is a reseller option to have a custom default config).
The “no default config” option means the default config will not be applied after the reset.
If you reset it again without checking “no default config”, then the default config will be applied.
“No default config” is very useful for applying your own config script. It gives you a blank canvas, making scripting a lot easier!
I have my “config.rsc” file that has the required configuration. And I have a “reset.auto.rsc” file that only has the command to reset the mikrotik with no defaults and to run the “config.rsc” script after reset.
Any file named “*.auto.rsc” will be executed as soon as it gets FTPed (it’s a MikroTik feature).
I use a bash script that FTPs the config.rsc file to the mikrotik, then the reset.auto.rsc file.
Makes it trivial to tweak the config then apply it, and I get all the config for the devices in easy to edit/diff script files.
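The upload step itself is tiny. Mine is bash, but here’s a rough Python equivalent using ftplib to show the flow (the address and credentials are placeholders, adjust for your own device):

```python
from ftplib import FTP

MIKROTIK = "192.168.88.1"  # placeholder address

def upload(ftp: FTP, filename: str) -> None:
    # Plain FTP upload; a *.auto.rsc file runs as soon as the transfer finishes.
    with open(filename, "rb") as f:
        ftp.storbinary(f"STOR {filename}", f)

with FTP(MIKROTIK) as ftp:
    ftp.login("admin", "password")  # placeholder credentials
    upload(ftp, "config.rsc")       # pushed first so it is on the device
    upload(ftp, "reset.auto.rsc")   # runs immediately: reset, then run config.rsc
```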
If you want remote access to your home services behind a cgnat, the best way is with a VPS. This gives you a static public IP that your services connect to, and that you can connect to when out and about.
If you don’t want the traffic decrypted on the VPS, then tunnel the VPN back to your homelab.
As the VPN traffic is already encrypted, there is no point in re-encrypting it between the VPS and the homelab.
Rathole https://github.com/rapiz1/rathole is one of the easiest I have found for this.
Or you can do things with ssh tunnels.
For the VPN, WireGuard is very good.
I would say the more regular expiration and renewal of an LE cert is better.
It’s an ongoing check instead of an annual check.
At the homelab scale, proxmox is great.
Create a VM, install docker and use docker compose for various services.
Create additional VMs when you feel the need. You might never feel the need, and that’s fine. Or you might want a VM per service for isolation purposes.
Have proxmox take regular snapshots of the VMs.
Every now and then, copy those backups onto an external USB hard drive.
Take snapshots before, during and after tinkering so you have checkpoints to restore to. Copy the latest snapshot onto an external USB drive once you are happy with the tinkering.
Create a private git repository (on GitHub or whatever), and use it to store your docker-compose files, related config files, and little readmes describing how to get that compose file to work.
Proxmox solves a lot of headaches. Docker solves a lot of headaches. Both are widely used, so plenty of examples and documentation about them.
That’s all you really need to do.
At some point, you will run into an issue or limitation. Then you have to solve for that problem, update your VMs, compose files, config files, readmes and git repo.
Until you hit those limitations, what’s the point in over-engineering it? It’s just going to overcomplicate things. I’m guilty of this.
The need to automate any of the above will become apparent when the tinkering stops being fun.
The best thing to do to learn all these services is to comb the documentation, read GitHub issues, browse the source a bit.
Bitwarden is cheap enough, and I trust them as a company enough that I have no interest in self hosting vaultwarden.
However, all these hoops you have had to jump through are excellent learning experiences, and what you’ve learned applies to more of your self-hosted setup.
Reverse proxies are the backbone of hosting and services these days.
Learning how to inspect docker containers, source code, config files and documentation to find where critical files are stored is extremely useful.
Learning how to set up more useful/granular backups beyond a basic VM snapshot in proxmox can be applied to any install anywhere.
The most annoying thing about a lot of these is that tutorials are “minimal viable setup” sorta things.
Like “now you have it setup, make sure you tune it for production” and it just ends.
And other tutorials that cover the next step, getting things production ready, often reference outdated versions or assume a different core setup, so they don’t quite apply.
I understand your frustrations.
Yeh, but my ZFS partition is a COW
Nano is useful because it is everywhere.
There are better editors, but being familiar with nano and its shortcuts means you can edit files pretty much anywhere.
Same with knowing the basics of vim (like being able to edit, exit and save)
If your windows computer makes an outbound connection to a server that is actively exploiting this, then yes: you will suffer.
But if your windows computer is chilling behind a network firewall that only forwards established IPv6 traffic (like 99.9999% of default routers/firewalls), then you are extremely, extremely, ultra unlucky to be hit by this (or you are such a high value target that it’s likely government-level exploits). Or you are an idiot visiting dodgy websites or running dodgy software.
Once a device on a local network has been successfully exploited for the RCE to actually gain useful code execution, then yes: the rest of your network is likely compromised.
Classic security in layers. Isolation/layering of risky devices (that’s why my homelab is on a different VLAN than my home network).
And even if you don’t realise your windows desktop has been exploited (I really doubt this is a clean exploit; you would probably notice a few BSODs before they figure out how to backdoor it), it then has to actually exploit your servers.
Even if they turn your desktop into a botnet node, that will very quickly be cleaned out by windows defender.
And I doubt that any attacker will have time to actually turn this into a useful and widespread exploit, except in targeting high value targets (which none of us here are. Any nation state equivalent of the US DoD isn’t lurking on Lemmy).
It comes back to: why are you running windows as a server?
ETA:
The possibility that high value targets are exposing windows servers on public IPv6 addresses is what makes this CVE rated so high.
Sensible people and sensible companies will be using Linux.
Sensible people and sensible companies will be very closely monitoring what’s going on with windows servers exposed by ipv6.
This isn’t an “ipv6 exploit”. This is a windows exploit. Of which there have been MANY!
If the router/gateway/network firewall (i.e. not the local one) is blocking forwarding of unknown IPv6, then it’s a compromised server that you connect to via IPv6 that has the ability to leverage the exploit (i.e. your windows client connecting to a compromised server that is actively exploiting this IPv6 CVE).
It’s not like having IPv6 enabled on a windows machine automatically makes it instantly exploitable by anyone out there.
Routers/firewalls will only forward IPv6 for established connections, so your windows machine has to connect out.
Unless you are specifically forwarding to a windows machine, at which point you are intending that windows machine to be a server.
Essentially the same as some exploit in some service you are exposing via NAT port forwarding.
Maybe a few more avenues of exploit.
Like I said. Why would a self-hoster or homelabber use windows for a public facing service?!
So you have local DNS set up?
If you ping (or dig) speed.mydomain.local, does it resolve the same address as local_ip?
Considering you are accessing local_ip:3000 directly but the domain on port 443, there is clearly either a firewall somewhere redirecting packets, or a reverse proxy in front of the domain but not in front of local_ip:3000.
Follow the chain: ports, forwarding, proxying etc. One of those will be the bottleneck. Then figure out why.
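If you’d rather script the check than run dig by hand, something like this (Python, using the placeholder names from your post) prints every address the hostname resolves to, for comparing against local_ip:

```python
import socket

# Resolve the hostname the same way a client on your LAN would.
addrs = {ai[4][0] for ai in socket.getaddrinfo("speed.mydomain.local", 443)}
print(addrs)  # compare these against the local_ip you use for local_ip:3000
```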
Edit:
Just because your ISP speed is 100mbps and you are seeing 500mbps doesn’t mean the connection isn’t hairpinning through your router via its public IP (as in, the traffic never leaves your router, but still goes through it).