A person with way too many hobbies who still continues to learn new things.

  • 1 Post
  • 21 Comments
Joined 2 years ago
Cake day: June 7th, 2023


  • Keep an eye out for people trashing perfectly good desktop machines because Windows 10 is being retired.

    If you want a server that “does it all”, then you would need the most decked-out, top-of-the-line server available… Obviously that is unrealistic, so as others have mentioned, knowing WHAT you want to run is required to even begin to guess at what you will need.

    Meanwhile, here’s what I suggest – grab any desktop machine you can find to get yourself started. Load up an OS and start adding services. Maybe you want to run a personal web server, a file server, or something more extensive like Nextcloud. Get those things installed and see how they run. At some point you will start seeing performance issues, and that tells you it’s time to upgrade to something with more capability. You may simply need more memory or a better CPU, in which case you can get the parts, or you may need to step up to something with dual CPUs or internal RAID. You might also consider splitting services between multiple desktop machines, for instance having one dedicated NAS and another running Nextcloud. Your personal setup will dictate what works best for you, but the best way to learn these things is to just dive in with whatever hardware you can get ahold of (especially when it’s free) and use that as your baseline for any upgrades.
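
    A few quick ways to spot those performance issues when they show up – just standard Linux tools, nothing exotic (iostat comes from the sysstat package on most distros):

    # Memory and swap usage at a glance
    free -h

    # Live per-process view of CPU and memory
    top

    # Extended disk I/O stats every 5 seconds, to spot drive saturation
    iostat -x 5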


  • But why doesn’t it ever empty the swap space? I’ve been using vm.swappiness=10 and I’ve tried vm.vfs_cache_pressure at both 100 and 50. Checking ps, I’m not seeing any services that would be idling in the background, so I’m not sure why the system thought it needed to put anything in swap. (And FWIW, I run two servers with identical services that I load-balance between, but the other machine has barely used any swap space – which adds to my confusion about the difference.)

    Why would I want to reduce the amount of memory in the server? Isn’t all that cache memory being used to help things run more smoothly and reduce drive I/O?
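
    For reference, here is how those knobs get set – applied at runtime with sysctl and persisted under /etc/sysctl.d/ (the filename is arbitrary, and the values are just the ones mentioned above, not recommendations):

    # Apply immediately (lost on reboot)
    sysctl vm.swappiness=10
    sysctl vm.vfs_cache_pressure=50

    # Persist across reboots (example filename)
    printf 'vm.swappiness = 10\nvm.vfs_cache_pressure = 50\n' > /etc/sysctl.d/90-swap-tuning.conf
    sysctl --system    # reload every sysctl config file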


  • And how does cache space figure into this? I have a server with 64GB of RAM, of which 46GB is being used by system cache, but I only have 450MB of free memory and 140MB of free swap. The only ‘volatile’ service I have running is slapd, which can run in bursts of activity; otherwise the only things of consequence running are webmin and some VMs which can collectively use up to 24GB (though they actually use about half that), but there’s no reason those should hit swap space. I just don’t get why the swap space is being run dry here.
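
    For anyone debugging the same thing, a rough sketch for seeing which processes are actually holding swap – it just reads the VmSwap field out of /proc (run as root to see everything):

    # Print per-process swap usage in kB, largest first
    for f in /proc/[0-9]*/status; do
        awk '/^Name:/{n=$2} /^VmSwap:/{if ($2 > 0) print $2, "kB", n}' "$f" 2>/dev/null
    done | sort -rn | head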


  • But is it decentralized? Do the results from multiple spiders get combined to give everyone the same quality of search results, or do I need to scan the whole internet myself?

    [edit] I was looking at this earlier and couldn’t find the info. Started searching again just now and found it immediately… of course… (The answer is YES)


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Decentralized Search Engine · 4 months ago

    Yep, that’s exactly what I was looking at (https://github.com/searx/searx). As I said, it was a QUICK dive, but the wording was enough to make me shy away from it. For all the years I’ve been running servers, I won’t put up anything that requires the latest/greatest of any code, because that’s where about 90% of the zero-days seem to come from. Almost all the big ones I’ve seen in the last few years were things that made me panic until I realized that if your updates are more than a year old then none of this affects you. And the one that DID affect me had already been fixed through a security release.


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Decentralized Search Engine · 4 months ago

    I just did a quick dive into this and have some concerns. SearX appears to no longer be maintained; it was last updated three years ago. SearXNG was forked to use more recent libraries, but there were concerns that those are not always stable or fully vetted, and that SearXNG did not show the same concern for user privacy. It’s a shame that SearX shut down – that one actually sounds like a project I would have jumped on.


  • More drives also means higher power consumption, so you would need a larger battery backup.

    It also means more components prone to failure, which increases your chance of losing data. More drives means more moving parts and electrical connections, including data and power cables and backplanes, plus more generated heat that you need to cool down.

    I’d be more concerned about how many failures you’re seeing that make you think smaller drives would be the better option. I have historically used old drives from eBay or manufacturer refurbs, and even the worst of those have been reliable enough that I only have to replace a drive once every year or two. With RAID6 or raidz2 you should be secure enough during a rebuild to prevent data loss. I wouldn’t consider using a lot of little drives unless it was the only option I had, or someone gave them away for free.
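
    For anyone curious what that looks like in practice, a minimal sketch of creating a raidz2 pool – the pool name and disk IDs below are placeholders, but do use the /dev/disk/by-id/ paths:

    # Six-disk raidz2: any two disks can fail without losing data
    zpool create poolname raidz2 \
        /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 \
        /dev/disk/by-id/ata-disk3 /dev/disk/by-id/ata-disk4 \
        /dev/disk/by-id/ata-disk5 /dev/disk/by-id/ata-disk6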



  • Are you sure about that? Ever heard of the supposedly predictable network names in recent Linux versions? Yeah, those can change too. I was trying to set up a new firewall with two internal NICs plus a 4-port card, and they kept moving around. I finally figured out that if I cold-booted, the NICs would come up in one order, and if I warm-booted they would come up in a completely different order (the ports on the card would reverse the order in which they were detected). This was completely the fault of systemd, because when I installed an older Linux and used udev to map the ports, it worked exactly as predicted. These days I trust nothing.
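
    For reference, the udev mapping looks something like this – a sketch of /etc/udev/rules.d/70-persistent-net.rules with placeholder MAC addresses and interface names:

    # Pin interface names to the hardware MAC so they never shuffle between boots
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="lan0"
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="wan0"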


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Help with ZFS Array · edited · 8 months ago

    OP – if your array is in good condition (and it looks like it is), you have the option to replace the drives one by one, but this will take some time (probably over a period of days). The idea is to remove a disk from the pool by its old name, then re-add it under the corrected name, wait for the pool to rebuild, and repeat with the next drive. Double-check, but I think this is the proper procedure…

    # Take the drive offline under its old device name
    zpool offline poolname /dev/nvme1n1p1

    # Re-add the same drive under its stable by-id name and let the pool rebuild onto it
    zpool replace poolname /dev/nvme1n1p1 /dev/disk/by-id/drivename

    Check zpool status to confirm when the drive is done rebuilding under the new name, then move on to the next drive. This is the process I use when replacing a failed drive in a pool, and since that one drive is technically in a failed state right now, the same process should work for transferring over to the safe names. Keep in mind that this will probably put a lot of strain on your drives since the contents have to be rebuilt (although there is a small chance zfs may recognize the drive contents and just start working immediately), so be prepared in case a drive actually does fail during the process.
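
    The check between each swap is just the following – wait until the resilver finishes and the pool shows ONLINE before touching the next drive:

    # Shows resilver progress and the health of every drive in the pool
    zpool status poolname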


  • That is definitely true of zfs as well. In fact, I have never seen a guide which suggests anything other than using the names found under /dev/disk/by-id/ or /dev/disk/by-uuid/, and that is to prevent this very problem. If the proper convention is used then you can plug the drives in through any available interface, in any order, and zfs will easily re-assemble the pool at boot.

    So now this raises the question… is proxmox using some insane configuration that creates drive clusters using whatever name the drives happen to boot up with???
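
    If a pool was built against the raw /dev/sdX names, it can usually be switched over to the stable names by re-importing it – a sketch, with poolname as a placeholder:

    # Export the pool, then re-import it scanning only the stable by-id names
    zpool export poolname
    zpool import -d /dev/disk/by-id poolname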


  • I think I missed something in your description – what are you running on your local server? I think most people set up postfix to relay the emails over to Gmail or whoever, and there are options in postfix for backwards compatibility with Outlook or even Microsoft Mail, so your wife could use whatever client she wants. If you don’t have a local mail server set up, then this is probably what you want to do. This method allows a local or remote connection from any client, so you could run K9 on your phone instead of a VPN.

    For opening such a setup to the internet (and allowing access from anywhere), make sure you have strong passwords on your accounts, require SASL authentication, and set up fail2ban to block repeated attempts to hack your mailboxes. Don’t run anything else on the same server (or use virtual machines or strong containers) to reduce the chance of your mail server getting compromised other ways, and you should be good to go.