

That’s what I do, too. My Dovecot is at home and I collect emails from all my accounts using fetchmail.
A nice thing is Dovecot Pigeonhole for Sieve and FTS Flatcurve for ultra-fast indexing and search.
Yeah, haha. 😂
Wait a moment… 🤔
2 HDDs (mirrored zpool), 1 SATA SSD for cache, 32 GB RAM
First read: 120 MB/s
Read while fully cached (obviously in RAM): 4.7 GB/s
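For reference, a pool with that layout could be created roughly like this (device names are placeholders, not my actual setup):

```
# Sketch only: two mirrored HDDs plus the SATA SSD as L2ARC read cache.
# Prefer stable /dev/disk/by-id/ paths over sdX names in real use.
zpool create tank mirror /dev/sda /dev/sdb cache /dev/sdc
```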
My Dovecot is still 2.3.21; that’s the most recent package. But you’re right, the update doesn’t look trivial.
With your own IMAP server you serve your own email: fetchmail continuously delivers messages from all your accounts to it, so everything lives in one place and you only log in to your own server.
Basically what I said. Dovecot can be installed on any Linux or BSD system. You’ll need Pigeonhole and FTS Flatcurve as extensions.
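The relevant dovecot.conf bits look roughly like this (a sketch assuming Dovecot 2.3 with the fts-flatcurve plugin installed; paths and section layout may differ per distro):

```
# Enable full-text search via Flatcurve for all protocols
mail_plugins = $mail_plugins fts fts_flatcurve

# Enable Sieve (Pigeonhole) for local delivery
protocol lmtp {
  mail_plugins = $mail_plugins sieve
}

plugin {
  fts = flatcurve
  sieve = file:~/sieve;active=~/.dovecot.sieve
}
```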
Once fetchmail is installed, you can let it connect to all your IMAP or POP3 accounts; each polling process delivers your mail straight to your own Dovecot server.
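A minimal ~/.fetchmailrc sketch (hostnames, user names, and the dovecot-lda path are hypothetical; the path is e.g. /usr/libexec/dovecot/ on some systems):

```
# fetchmail refuses to start if this file is group/world readable: chmod 600 it
set daemon 60                        # poll all accounts every 60 seconds

poll imap.example.com protocol IMAP
    user "me@example.com" password "secret" ssl
    mda "/usr/lib/dovecot/dovecot-lda -d localuser"

poll pop.example.org protocol POP3
    user "me" password "secret" ssl
    mda "/usr/lib/dovecot/dovecot-lda -d localuser"
```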
You’ll also need a certificate for Dovecot. This can be solved using Let’s Encrypt.
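For example, with certbot in standalone mode (any ACME client works; the hostname is a placeholder):

```
# Obtain the certificate (port 80 must be reachable for standalone mode)
certbot certonly --standalone -d mail.example.com
```

```
# dovecot.conf: the leading '<' tells Dovecot to read the file contents
ssl = required
ssl_cert = </etc/letsencrypt/live/mail.example.com/fullchain.pem
ssl_key  = </etc/letsencrypt/live/mail.example.com/privkey.pem
```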
You can use any mail client you want. I use FairEmail on Android. On desktops and notebooks it’s Thunderbird.
I sync my emails using a Dovecot IMAP server on my home server. I fetch emails from all my accounts with fetchmail and sort them into the right folders using Sieve. They get indexed and are searchable (ultra fast!).
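As an illustration, a small Sieve script for the sorting part (folder names and addresses are made up):

```
# ~/.dovecot.sieve sketch
require ["fileinto", "mailbox"];

if address :is "to" "me@example.com" {
    fileinto :create "Accounts/example";
} elsif header :contains "list-id" "users.lists.example.org" {
    fileinto :create "Lists/example";
}
# anything else falls through to INBOX
```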
225,000 emails in 13 GB (on ZFS with compression; 18 GB uncompressed).
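You can check the savings yourself (dataset name is hypothetical):

```
# ~18 GB of mail stored in ~13 GB on disk is roughly a 1.4x compress ratio
zfs get compression,compressratio tank/mail
```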
If they cannot manage their own infrastructure, they also don’t know what infrastructure is needed for their services. And they won’t even have the opportunity to learn anymore.
Secondly, if you buy external services, you also have to invest in better connectivity. You can still work on your on-premises servers if your internet connection fails; you cannot if you have outsourced essential parts.
Ok. Thanks. The drive looks fine. The 33% health seems to come from the average block erase count. This is the most expensive operation on SSDs.
Why does it increase faster? Because blocks are written partially. Worst case: you write 1 byte to a block, then another byte into the same block, and the second write already forces a full block erase (a flash erase block is typically 128 kB, not 4 kB like HDD sectors).
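To put rough numbers on it (ignoring the controller’s caching and wear leveling): rewriting a 4 kB file 1,000 times in place can cost 1,000 erases of a 128 kB block, i.e. ~128 MB of internal flash writes for ~4 MB of user data, a 32x write amplification.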
Your SSD is very busy. You should review what is going on on your system.
Can you try to install smartmontools and show the output of smartctl? Health could be a combination of multiple values.
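Something like this (package name on Debian-style systems; adjust the device name):

```
# Debian/Ubuntu: sudo apt install smartmontools
sudo smartctl -a /dev/sda    # full report: overall health, attributes, error log
sudo smartctl -A /dev/sda    # vendor attributes only (erase counts, wear level, ...)
```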
This is an old PC (Intel i7 3770K) with 2 HDDs (16 TB) attached to the onboard SATA3 controller, 16 GB RAM, and 1 SSD (120 GB). Nothing special. And it’s quite busy because it’s my home server with a VM and containers.
The question is how you get bad performance with ZFS.
I just tried to read a large file and got 280 MB/s uncached from two mirrored HDDs.
The fourth run (obviously cached) gave me over 3.8 GB/s.
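A crude way to run such a test (file path is a placeholder; note that drop_caches only empties the page cache, not the ZFS ARC, so a truly cold read needs a pool export/import or a reboot):

```
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # page cache only, not the ARC
dd if=/tank/bigfile of=/dev/null bs=1M status=progress
```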
Well, I’d need to repeat my comment below yours again.
Thanks for that. I hate people who leave out important information and context. They are evil.
There is nothing to “refurbish” in drives; they are just second-hand devices. You can check whether they are fine pretty easily, and you need to look at the age (power-on hours). I replace drives at 50k-60k hours, even if they still test fine.
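Checking a second-hand drive is mostly this (attribute names vary by vendor; device name is a placeholder):

```
sudo smartctl -A /dev/sdb | grep -Ei 'power_on|reallocated|pending'
sudo smartctl -t long /dev/sdb   # full surface self-test; read the result later with -a
```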
I only know pasta with sugar and:
Are you looking for something like cached credentials?
Some hard drives are built for 24/7 operation. They have higher MTBF ratings and longer warranties.
Hard drives vary a lot. Many of them waste energy, lie in the SMART log, or are just weird (spinning up and down, losing speed, getting incredibly hot, etc.).
So it’s the good old client certificate authentication?