Perfect, we’ll just spin up an image of your machine in EC2, give it a public IP, set the default network rules to “allow any any” and we’re good. And I have no idea why the security team just all quit.
Along with the things others have said (backups, Linux, Docker, networking), I’d also recommend getting comfortable with server and network security. A lot of this is wrapped up in the simple mantra “install your goddamn updates!” But there is more to it than that. For example, if you go with Nextcloud, read through their hardening guide and seriously consider implementing all of the recommendations. Also think through how you intend to manage both the server and the instance. If this is all local, it’s easier, as you can keep SSH access to the server firewalled off from the internet. If you host part of your stuff “in the cloud”, you’ll want to start locking down access and using keys to log in (which is good practice in all situations). Also, never use default credentials.

You may also want to familiarize yourself with the logs provided by the applications and maybe set up some monitoring around them. I personally run Nextcloud and feed all my logs into Splunk (you can run a free instance in a Docker container). I have a number of dashboards I look at every morning to keep an eye on things: failed/successful logins, traffic sources, URI requests, file access, etc.

If your server is attached to the internet, it will be under attack constantly. Fail2Ban on my WireGuard container banned 112 IP addresses over the last 24 hours, each after 3 failed attempts to log in via SSH. Less commonly, attackers try to log in to my Nextcloud instance. And my WordPress site is under constant attack. If you choose to run WordPress, be very careful about the plugins you install, and then keep them up to date. WordPress itself is reasonably secure; the plugins are a shit-show, and worse when they aren’t kept up to date.
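To give a concrete idea of what I mean by log monitoring, here’s a rough Python sketch of the kind of failed-login counting my dashboards do. It assumes a Debian-style /var/log/auth.log; adjust the path and regex for your distro. Treat it as an illustration, not a replacement for Fail2Ban or Splunk:

```python
#!/usr/bin/env python3
"""Rough sketch: count failed SSH logins per source IP.
Assumes a Debian-style /var/log/auth.log (adjust for your distro)."""
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"  # assumption: sshd logs here on your box
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

counts = Counter()
with open(LOG_PATH, errors="replace") as log:
    for line in log:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1

# Top offenders: the IPs you'd expect Fail2Ban to have banned already.
for ip, hits in counts.most_common(10):
    print(f"{hits:5d} failed SSH logins from {ip}")
```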
I’m sure there are several out there. But, when I was starting out, I didn’t see one and just rolled my own. The process was general enough that I’ve mostly been able to just replace the Steam app ID of the game in the Dockerfile and have it work well for other games. It doesn’t do anything fancy like automatic updating; but, it works and doesn’t need anything special.
I see containers as having a couple of advantages:
That all said, if an application does not have an official container image, the added complexity of creating and maintaining your own image can be a significant downside. One of my use cases for containers is running game servers (e.g. Valheim). There isn’t an official image; so, I had to roll my own. The effort to set this up isn’t zero and, when trying to sort out an image for a new game, it does take me a while before I can start playing. And those images need to be updated when a new version of the game releases. Technically, you can update a running container in a lot of cases; but, I usually end up rebuilding it at some point anyway.
I’d also note that careful use of VMs and snapshots can replicate or mitigate most of the advantages I listed. I’ve done both (a decade and a half as a sysadmin). But, part of that “careful use” usually meant spinning up a new VM for each application. Putting multiple applications on the same OS install was usually asking for trouble. Eventually, one of the applications would get borked, and having the flexibility to just nuke the whole install saved a lot of time and effort. Going with containers removed the need to nuke the OS along with the application to get a similar effect.
At the end of the day, though, it’s your box; you do what you are most comfortable with and want to support. If that’s a monolithic install, then go for it. While I, or others, might find containers a better answer for us, maybe they aren’t for you.
My list of items I look for:
As for that hackernews response, I’d categorically disagree with most of it.
An app, self-contained, (essentially) a single file with minimal dependencies.
Ya…no. Complex stuff is complex. And a lot of good stuff is complex. My main self-hosted app is Nextcloud. Trying to run that as some monolithic app would be brain-dead stupid. Just for the sake of maintainability, it is going to need to be a fairly sprawling list of files and folders. And it’s going to be dependent on some sort of web server software. And that is a very good place to NOT roll your own. Good web server software is hard; secure web server software is damn near impossible. Let the large projects (Apache/Nginx) handle that bit for you.
Not something so complex that it requires docker.
“Requires docker” may be a bit much. But, there is a reason people like to containerize stuff, it avoids a lot of problems. And supporting whatever random setup people have just sucks. I can understand just putting a project out as a container and telling people to fuck off with their magical snowflake setup. There is a reason flatpak is gaining popularity.
Honestly, I see docker as a way to reduce complexity in my setup. I don’t have to worry about dependencies or having the right version of some library on my OS. I don’t worry about different apps needing different versions of the same library. I don’t need to maintain different virtual Python environments for different apps. The containers “just work”. Hell, I regularly dockerize dedicated game servers just for my wife and me to play on.
Not something that requires you to install a separate database.
Oh goodie, let’s all create our own database formats and re-learn the lessons of the '90s about how hard databases actually are! No really, fuck off with that noise. If your app needs a small database backend, maybe try SQLite. But, some things just need a real database. And as with web servers, rolling your own is usually a bad plan.
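For what it’s worth, SQLite is about as low-friction as it gets: no server process, no credentials, and the “database format” problem is already solved for you. A minimal Python sketch (the table and file names are just made up for illustration):

```python
import sqlite3

# Hypothetical single-file backend: the whole "database" is app.db.
conn = sqlite3.connect("app.db")
conn.execute("""CREATE TABLE IF NOT EXISTS notes (
    id      INTEGER PRIMARY KEY,
    body    TEXT NOT NULL,
    created TEXT DEFAULT CURRENT_TIMESTAMP
)""")
conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
conn.commit()

for row in conn.execute("SELECT id, body, created FROM notes"):
    print(row)
conn.close()
```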
Not something that depends on redis and other external services.
Again, sometimes you just need certain functionality, and there is no point re-inventing the wheel every time. Breaking those discrete things out into other microservices can make sense. Sure, this means you are now beholden to everything that other service does; but, your app will never be an island. You are always going to be using libraries that other people wrote. Just try to avoid too much sprawl. Every dependency you spin up means your users are now maintaining an extra application. And you should probably build a bit of checking into your app to ensure that those dependencies are in sync. It really sucks to upgrade a service and have it fail, only to discover that one of its dependencies needed to be upgraded manually first, and now the whole thing is corrupt and needs to be restored from backup. Yes, users should read the release notes; no, they never do.
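On that dependency-sync point: even something this dumb at startup saves your users a corrupted install. A hypothetical Python sketch, using SQLite’s user_version pragma as a stand-in for whatever schema/version marker your app actually keeps:

```python
import sqlite3
import sys

EXPECTED_SCHEMA = 4  # hypothetical: the schema this build was written against

def check_schema(path="app.db"):
    """Refuse to start against a schema we don't understand, instead of
    corrupting it and forcing a restore from backup."""
    conn = sqlite3.connect(path)
    try:
        found = conn.execute("PRAGMA user_version").fetchone()[0]
    finally:
        conn.close()
    if found != EXPECTED_SCHEMA:
        sys.exit(f"Schema v{found} found, v{EXPECTED_SCHEMA} required. "
                 "Run the migration first (see release notes).")

check_schema()
```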
The corollary here is to be careful about setting your users up for a supply chain attack. Every dependency or external library you add is one more place for your application to be attacked. And just because the actual vulnerability is in SomeCoolLib.js, it’s still your app getting hacked. You chose that library, you’re now beholden to everything it gets wrong.
At the end of it all, I’d say the best app to write is the one you are interested in writing. The internet is littered with lots of good intentions and interesting starts. There is a lot less software which is actually feature complete and useful. If you lose interest, because you are so busy trying to please a whole bunch of idiots on the other side of the internet, you will never actually release anything. You do you, and fuck all the haters. If what you put out is interesting and useful, us users will show up and figure out how to use it. We’ll also bitch and moan, no matter how great your app is. It’s what users do. Do listen, feedback is useful. But, also remember that opinions are like assholes: everyone has one, and most of them stink.
Porn.
All six of them, just different forms of porn.
Na, my experience is that Defender is fine with users downloading browsers and “updates” from random Russian sites. It’s happy to let users install that software and only bothers to log a “hey, maybe this was bad” alert some time later. Edge, on the other hand, loses its shit when you visit the official download sites for Chrome or Firefox.
But they pinky promised that only the “good guys” would use the “front doors”. /s
There may also be a (very weak) reason around bounds checking and avoiding buffer overflows. By rejecting anything longer than 20 characters, the developer can be sure that nothing longer will be sent to the back-end code. While they should still be doing bounds checking in the rest of the code, if the team making the UI is not the same as the team making the back end, the UI team may see it as a reasonable restriction to prevent a screw-up further down the stack from being exploited. Again, it’s a very weak argument, but I can see such an argument being made in a large organization with lots of teams that don’t talk to each other. Or worse yet, different contractors standing up the front end and back end.
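In code, that “defense” amounts to little more than this. A Python sketch with a made-up limit and function name, just to show the front-end guard (the back end must still do its own checking):

```python
MAX_PASSWORD_LEN = 20  # hypothetical limit from the UI team's contract

def validate_password(candidate: str) -> str:
    """Front-end guard: reject anything the back end never promised to
    handle safely. Not a substitute for back-end bounds checking."""
    if len(candidate) > MAX_PASSWORD_LEN:
        raise ValueError(f"password longer than {MAX_PASSWORD_LEN} characters")
    return candidate
```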
I don’t know how anyone makes it without a password manager at this point.
Password reuse. Password reuse everywhere.
I’d agree with unbuckled above: it’s a DNS issue. If your mobile device is capable, use nslookup or dig to see what responses you are getting in different scenarios. It’s possible that your VPN software is leaking DNS queries out to the mobile provider’s DNS servers while you are on mobile data, and only using the correct DNS settings when you are on wifi. Look for split-tunnel settings in the VPN software, as split tunneling can create exactly this type of situation.
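If you can get a Python shell on the device (or on a laptop riding the same VPN), the third-party dnspython package will show you which resolver you’re actually using and what it answers. A sketch; “pihole.lan” is a stand-in for any name only your pihole can resolve:

```python
import dns.resolver  # third-party: pip install dnspython

resolver = dns.resolver.Resolver()  # reads the system resolver config
print("System resolvers:", resolver.nameservers)

try:
    # Run this once on wifi and once on mobile data; a different
    # nameserver or answer means the VPN is leaking DNS.
    answer = resolver.resolve("pihole.lan", "A")
    print("Answers:", [rr.to_text() for rr in answer])
except dns.resolver.NXDOMAIN:
    print("NXDOMAIN: this resolver doesn't know your internal names,")
    print("so queries are probably leaking past the VPN.")
```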
You can also confirm this from the pihole side. Connect to the VPN via mobile data and browse to some website you don’t use often, but one that is not your own internal stuff. Then open the query log on your pihole and see if that domain shows up. I’d put money on that query not showing up in the pihole query log.