A person with way too many hobbies who is still learning new things.

  • 1 Post
  • 15 Comments
Joined 2 years ago
Cake day: June 7th, 2023


  • But is it decentralized? Do the results from multiple spiders get added together to give everyone the same quality of searches, or do I need to scan the whole internet myself?

    [edit] I was looking at this earlier and couldn’t find the info. Started searching again just now and found it immediately… of course… (The answer is YES)


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Decentralized Search Engine · 7 days ago

    Yep, that’s exactly what I was looking at (https://github.com/searx/searx). As I said, it was a QUICK dive, but the wording was enough to make me shy away from it. In all the years I’ve been running servers, I won’t put up anything that requires the latest/greatest of any code, because that’s where about 90% of the zero-days seem to come from. Almost all the big ones I’ve seen in the last few years were things that made me panic until I realized that, oh, if your updates are more than a year old then none of this affects you. And the one that DID affect me had already been patched through a security release.


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Decentralized Search Engine · 8 days ago

    I just did a quick dive into this and have some concerns. SearX appears to no longer be maintained and was last updated three years ago. SearXNG was forked from it to use more recent libraries, but there were concerns that those libraries are not always stable or fully vetted, and that SearXNG does not uphold the same standards for user privacy. It’s a shame that SearX shut down; that one actually sounds like a project I would have jumped on.


  • More drives also means higher power consumption, so you would need a larger battery backup.

    It also means more components prone to failure, which increases your chance of losing data: more moving parts and electrical connections (data and power cables, backplanes), plus more generated heat that you need to cool.

    I’d be more curious about how many failures you’re seeing that make you think smaller drives would be the better option. I have historically used old drives from ebay or manufacturer refurbs, and even the worst of those have been reliable enough that I only replace a drive once every year or two. With RAID6 or raidz2 you should be plenty secure against data loss during a rebuild (see the sketch below). I wouldn’t consider using a lot of little drives unless it was the only option I had, or someone gave them away for free.
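
    For anyone unfamiliar with raidz2, here is a minimal sketch of creating a double-parity pool, which survives any two simultaneous drive failures; the pool name and disk IDs are placeholders:

    # Hypothetical 6-disk raidz2 pool; substitute your own /dev/disk/by-id/ paths.
    zpool create tank raidz2 \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
        /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6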



  • Are you sure about that? Ever heard of those supposedly “predictable” network interface names in recent Linux versions? Yeah, those can change too. I was trying to set up a new firewall with two internal NICs plus a 4-port card, and the names kept moving around. I finally figured out that if I cold-booted, the NICs would come up in one order, and if I warm-booted they would come up in a completely different order (the ports on the card would reverse the order in which they were detected). This was completely the fault of systemd, because when I installed an older Linux and used udev to map the ports, it worked exactly as predicted (see the rule sketch below). These days I trust nothing.
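
    For anyone fighting the same thing, the old udev approach pins each interface name to a MAC address; a minimal sketch, with placeholder MAC addresses and names:

    # /etc/udev/rules.d/70-persistent-net.rules (MACs and names are placeholders)
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="wan0"
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="lan0"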


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Help with ZFS Array · edited · 4 months ago

    OP – if your array is in good condition (and it looks like it is), you have the option to replace the drives one by one, but this will take some time (probably a period of days). The idea is to take a disk offline under its old name, then replace it with itself under the corrected name, wait for the pool to rebuild, and repeat with the next drive. Double-check, but I think this is the proper procedure…

    # Take the drive offline under its current (unstable) device name:
    zpool offline poolname /dev/nvme1n1p1

    # Replace it with the same disk, referenced by its stable by-id name:
    zpool replace poolname /dev/nvme1n1p1 /dev/disk/by-id/drivename

    Check zpool status to confirm when the drive is done rebuilding under the new name, then move on to the next drive. This is the process I use when replacing a failed drive in a pool, and since that one drive is technically in a failed state right now, the same process should work for you to transfer over to the safe names. Keep in mind that this will probably put a lot of strain on your drives since the contents have to be rebuilt (although there is a small possibility ZFS may recognize the drive contents and just start working immediately?), so be prepared in case a drive actually does fail during the process.
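
    To keep an eye on the resilver between swaps, something like this works (assuming the pool really is named poolname):

    # Re-run zpool status every 60 seconds until the resilver completes:
    watch -n 60 zpool status poolname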


  • That is definitely true of ZFS as well. In fact, I have never seen a guide that suggests anything other than using the names found under /dev/disk/by-id/ or /dev/disk/by-uuid/, and that is precisely to prevent this problem. If the proper convention is used, then you can plug the drives in through any available interface, in any order, and ZFS will easily re-assemble the pool at boot.

    So now this raises the question… is Proxmox using some insane configuration that creates drive clusters from whatever names the disks happen to boot up with???
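
    For anyone who hasn’t poked at them, the stable names are just symlinks to the kernel device nodes; listing them shows the mapping (output varies by hardware):

    # Show the stable identifiers and the /dev nodes they currently point at:
    ls -l /dev/disk/by-id/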


  • I think I missed something in your description, but what are you running on your local server? I think most people set up postfix to relay their email over to gmail or whoever, and there are options in postfix for backwards compatibility with Outlook or even Microsoft Mail, so your wife could use whatever client she wants. If you don’t have a local mail server set up, then this is probably what you want to do. This method allows a local or remote connection from any client, so you could run K9 on your phone instead of a VPN. A sketch of the relay side is below.
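
    A minimal sketch of the relay configuration in postfix, assuming Gmail as the smarthost (the credentials are placeholders; the parameters themselves are standard postfix settings):

    # /etc/postfix/main.cf -- relay all outbound mail through Gmail:
    relayhost = [smtp.gmail.com]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt

    Put the placeholder credentials in /etc/postfix/sasl_passwd as “[smtp.gmail.com]:587 user@gmail.com:app-password”, then run postmap /etc/postfix/sasl_passwd and reload postfix.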

    For opening such a setup to the internet (and allowing access from anywhere), make sure you have strong passwords on your accounts, require SASL authentication, and set up fail2ban to block repeated attempts to hack your mailboxes. Don’t run anything else on the same server (or use virtual machines or strong containers) to reduce the chance of your mail server getting compromised other ways, and you should be good to go.