Reserved for future use: krex & krox
Poor “kryx”.
Three-letter words that can be typed with one hand, since I have to type them frequently.
$ egrep "^([qwertasdfgzxcvb]{3}|[yuiophjklnm]{3})$" /usr/share/dict/words
https://biologydictionary.net/snow-leopard/
Snow leopards are apex predators, which means that they sit at the top of the food chain and have no natural predators themselves.
On the other hand, that’s “natural” predators, which explicitly excludes humans. We have our own unique tier in the chain.
https://en.wikipedia.org/wiki/Snow_leopard
The snow leopard is easily driven away from livestock and readily abandons kills, often without defending itself.[31] Only two attacks on humans have been reported, both near Almaty in Kazakhstan, and neither were fatal. In 1940, a rabid snow leopard attacked two men; and an old, toothless emaciated individual attacked a person passing by.[53][54]
Hundreds of endangered wild snow leopards are killed each year
As many as 450 endangered snow leopards have been killed each year since 2008, a report on the fate of the mountain cats estimates.
A big surprise is that more than half the killings – 55 per cent – are estimated to be done by herders avenging livestock attacks by leopards, with only 21 per cent of the cats taken by poachers.
Only 4000 to 7000 of the animals are thought to remain in the 12 mountainous Asian countries they inhabit.
That is, we annually kill something like 3%-10% of the global snow leopard population (up to 450 animals out of an estimated 4,000 to 7,000). They have never been known to kill a human, and have only very rarely attacked one.
I’m not entirely enthusiastic about the fact that the maintainer’s GitHub account uses an avatar that appears to be a flaming Pepe.
The first state listed is California.
https://en.wikipedia.org/wiki/California_state_tartan
The California state tartan is the official Scottish Tartan pattern of California, created July 23, 2001 and defined under law in California Government Code § 424.3(a). California State Assembly Member Helen MacLeod Thomson wrote the law.[1] The tartan was designed by J. Howard Standing of Tarzana, California, and Thomas Ferguson, Sydney, British Columbia.[2] Any resident of the state may claim the tartan, and the design as described in state law states that the tartan is a pattern or sett consisting of alternate squares of meadow-green and Pacific blue that are separated and surrounded by narrow charcoal bands.
There are only 5.4 million people in Scotland.
The very first state entry here assigns an official Scottish Tartan to 39.4 million people.
I also notice that there are apparently Australian and Canadian regional tartans.
you can’t follow people
He said “Lemmy” but probably meant “Threadiverse”, and mbin does support both the Twitter-style follow-a-user model and the Reddit-style forum model.
To use fedia.io as an example:
I dunno about piefed, haven’t used it.
Not a very large userbase.
I was skeptical, but the New York Times agrees.
https://www.nytimes.com/2022/01/27/world/asia/china-fight-club-ending.html
Now I’m kind of wondering how many anti-authoritarian movies out there have altered Chinese versions.
and uses btrfs send/receive to create backups.
I’m not familiar with that, but if it permits identifying data modified since a given time faster than scanning the filesystem for modified files (something a filesystem could potentially do), that could also be a useful backup enabler, since then your scan-for-changes time doesn’t need to be linear in the number of files in the filesystem. If you don’t do that, your next best bet on Linux – and this way would be filesystem-agnostic – is gonna require something like having a daemon that runs and uses inotify to build some kind of on-disk index of modifications since the last backup, plus a backup system that can understand that index.
looks at btrfs-send(1) man page
Ah, yeah, it does do that. Well, the man page doesn’t say what its time complexity is, but I assume that it’s better than linear in the file count on the filesystem.
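For reference, the incremental form I’m assuming looks roughly like this as a Python sketch (the snapshot paths are made up, and the snapshots have to be read-only for send to accept them):

    import subprocess

    # Two read-only snapshots of the same subvolume, taken at successive
    # backup times. Both paths are made up for the sketch, and /backup is
    # assumed to be a btrfs filesystem.
    parent = "/data/.snapshots/2024-01-01"
    current = "/data/.snapshots/2024-01-08"

    # "btrfs send -p parent current" emits only what changed between the two
    # snapshots, identified from snapshot metadata rather than by rescanning
    # every file; "btrfs receive" replays that stream on the backup side.
    send = subprocess.Popen(["btrfs", "send", "-p", parent, current],
                            stdout=subprocess.PIPE)
    subprocess.run(["btrfs", "receive", "/backup"], stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()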
You’re correct, and probably the person you’re responding to is treating one as an alternative to the other.
However, theoretically filesystem snapshotting can be used to enable backups, because a snapshot provides an instantaneous, consistent view of a filesystem. I don’t know if there are backup systems that do this with btrfs today, but it would involve taking a snapshot and then having the backup system back up the snapshot rather than the live view of the filesystem (roughly along the lines of the sketch below).
Otherwise, stuff like drive images and database files that are being written to while being backed up can just end up as corrupted, inconsistent files in the backup.
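If a backup tool did want to do that with btrfs, I’d expect it to look something like this (a sketch only; the paths, the existing .snapshots directory, and rsync as the stand-in backup tool are all assumptions):

    import subprocess
    import time

    # Assumed layout: /data is a btrfs subvolume with an existing
    # /data/.snapshots directory, and /backup is wherever the backup lands.
    src = "/data"
    snap = f"/data/.snapshots/backup-{int(time.time())}"

    # Take a read-only snapshot; this is the instantaneous, consistent view.
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r", src, snap], check=True)
    try:
        # Back up the snapshot rather than the live filesystem. rsync is just
        # a stand-in for whatever backup tool is actually in use.
        subprocess.run(["rsync", "-a", snap + "/", "/backup/"], check=True)
    finally:
        # Drop the snapshot once the backup run is finished.
        subprocess.run(["btrfs", "subvolume", "delete", snap], check=True)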
Wouldnt the sync option also confirm that every write also arrived on the disk?
If you’re mounting with the NFS sync option, that’ll avoid the “wait until close and probably reorder writes at the NFS layer” issue I mentioned, so that’d address one of the two issues, and the one that’s specific to NFS.
That’ll force each write to go, in order, to the NFS server, which I’d expect would avoid problems with the network connection being lost while flushing deferred writes. I don’t think that it actually forces the data to nonvolatile storage on the server at that time, so if the server loses power, that could still be an issue, but that’s the same problem one would get if the client machine loses power while running with a local filesystem image and the “less-safe” options for qemu.
NFS doesn’t do snapshotting, which is what I assumed that you meant and I’d guess ShortN0te also assumed.
If you’re talking about qcow2 snapshots, that happens at the qcow2 level. NFS doesn’t have any idea that qemu is doing a snapshot operation.
On a related note: if you are invoking a VM using a filesystem image stored on an NFS mount, I would be careful, unless you are absolutely certain that this is safe for the version of NFS and the specific caching options for both NFS and qemu that you are using.
I’ve tried to take a quick look. There’s a large stack involved, and I’m only looking at it quickly.
To avoid data loss via power loss, filesystems – and thus the filesystem images backing VMs using filesystems – require write ordering to be maintained. That is, they need to have the ability to do a write and have it go to actual, nonvolatile storage prior to any subsequent writes.
At a hard disk protocol level, like for SCSI, there are BARRIER operations. These don’t force something to disk immediately, but they do guarantee that all writes prior to the BARRIER are on nonvolatile storage prior to writes subsequent to it.
I don’t believe that Linux has any userspace way for a process to request a write barrier. There is no fwritebarrier() call. This means that the only way to impose write ordering is to call fsync()/sync() or similar operations. These force data to nonvolatile storage, and do not return until it is there. The downside is that this is slow. Programs that frequently do such synchronizations cannot issue writes very quickly, and are very sensitive to the latency of their nonvolatile storage.
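To make that concrete, a minimal sketch in Python (the file name and offsets are made up) of what imposing ordering on two writes looks like when fsync() is the only tool you have:

    import os

    # Hypothetical file; the point is only the ordering of the two writes.
    fd = os.open("journal.bin", os.O_WRONLY | os.O_CREAT, 0o644)

    os.pwrite(fd, b"record A", 0)
    # With no userspace write barrier available, the only way to guarantee
    # that "record A" is on nonvolatile storage before "record B" is to block
    # here until it has been flushed.
    os.fsync(fd)
    os.pwrite(fd, b"record B", 4096)
    os.fsync(fd)

    os.close(fd)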
From the qemu(1) man page:
By default, the cache.writeback=on mode is used. It will report data writes as completed as soon as the data is present in the host page cache. This is safe as long as your guest OS makes sure to correctly flush disk caches where needed. If your guest OS does not handle volatile disk write caches correctly and your host crashes or loses power, then the guest may experience data corruption. For such guests, you should consider using cache.writeback=off. This means that the host page cache will be used to read and write data, but write notification will be sent to the guest only after QEMU has made sure to flush each write to the disk. Be aware that this has a major impact on performance.
I’m fairly sure that this is a rather larger red flag than it might appear, if one simply assumes that Linux must be doing things “correctly”.
Linux doesn’t guarantee that a write to position A goes to disk prior to a write to position B. That means that if your machine crashes or loses power, then even for drive images stored on a filesystem on a local host, with the default settings you can potentially corrupt a filesystem image.
https://docs.kernel.org/block/blk-mq.html
Note
Neither the block layer nor the device protocols guarantee the order of completion of requests. This must be handled by higher layers, like the filesystem.
POSIX does not guarantee that write() operations to different locations in a file are ordered.
https://stackoverflow.com/questions/7463925/guarantees-of-order-of-the-operations-on-file
So by default – which is what you might be doing, wittingly or unwittingly – if you’re using a disk image on a filesystem, qemu simply doesn’t care about write ordering to nonvolatile storage. It does writes. It does not care about the order in which they hit the disk. It is not calling fsync() or using analogous functionality (like O_DIRECT).
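For what it’s worth, here’s a sketch of what opting into the more conservative behavior might look like when launching a guest (the image path and memory size are made up; as I understand it, cache=writethrough is the -drive shorthand that corresponds to the cache.writeback=off behavior the excerpt above describes):

    import subprocess

    # Made-up invocation. cache=writethrough still uses the host page cache
    # for reads, but a write is only reported to the guest as completed after
    # it has been flushed to the underlying storage, per the man page excerpt.
    subprocess.run([
        "qemu-system-x86_64",
        "-m", "2048",
        "-drive", "file=/var/lib/vms/guest.qcow2,format=qcow2,cache=writethrough",
    ], check=True)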
NFS entering the picture complicates this further.
https://www.man7.org/linux/man-pages/man5/nfs.5.html
The sync mount option
The NFS client treats the sync mount option differently than some other file systems (refer to mount(8) for a description of the generic sync and async mount options). If neither sync nor async is specified (or if the async option is specified), the NFS client delays sending application writes to the server until any of these events occur:
Memory pressure forces reclamation of system memory resources.
An application flushes file data explicitly with sync(2), msync(2), or fsync(3).
An application closes a file with close(2).
The file is locked/unlocked via fcntl(2).
In other words, under normal circumstances, data written by an application may not immediately appear on the server that hosts the file.
If the sync option is specified on a mount point, any system call that writes data to files on that mount point causes that data to be flushed to the server before the system call returns control to user space. This provides greater data cache coherence among clients, but at a significant performance cost.
Applications can use the O_SYNC open flag to force application writes to individual files to go to the server immediately without the use of the sync mount option.
So, strictly speaking, this doesn’t make any guarantees about what NFS does. It says that it’s fine for the NFS client to send nothing to the server at all on write(). With the default NFS mount options, the only time a write() to a file is guaranteed to make it to the server is when one of those events (memory pressure, an explicit sync/fsync, a close(), or a lock operation) occurs. If it’s not going to the server, it definitely cannot be flushed to nonvolatile storage.
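As an aside, the O_SYNC escape hatch the man page mentions looks like this from an application’s point of view (a sketch; the path is made up):

    import os

    # Hypothetical file on an NFS mount. With O_SYNC, each write() does not
    # return until the data has been pushed to the server, rather than sitting
    # in the client's cache until close().
    fd = os.open("/mnt/nfs/example.dat", os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    os.write(fd, b"this write goes to the server before write() returns")
    os.close(fd)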
Now, I don’t know this for a fact – I’d have to go digging around in the NFS client you’re using. But it would be compatible with the guarantees listed, and I’d guess that the NFS client probably isn’t keeping a log of all the write()s and then replaying them in order. Even if it did, for that ordering to meaningfully affect what’s on nonvolatile storage, the NFS server would also have to fsync() the file after each replayed write. Instead, the client is probably just keeping a list of dirty data in the file, and then flushing it to the NFS server at close().
That is, say you have a program that opens a file filled with all ‘0’ characters, and does:
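Something like this, as a rough Python sketch (the file name is made up):

    import os

    # The file already exists and is full of "0" characters.
    fd = os.open("image.bin", os.O_RDWR)

    os.pwrite(fd, b"1", 1)      # write "1" to position 1
    os.pwrite(fd, b"1", 5000)   # write "1" to position 5000
    os.pwrite(fd, b"2", 1)      # write "2" to position 1
    os.pwrite(fd, b"2", 5000)   # write "2" to position 5000

    os.close(fd)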
At close() time, the NFS client probably doesn’t flush “1” to position 1, then “1” to position 5000, then “2” to position 1, then “2” to position 5000. It’s probably just flushing “2” to position 1, and then “2” to position 5000, because when you close the file, that’s what’s in the list of dirty data in the file.
The thing is that unless the NFS client retains a log of all those write operations, there’s no way to send the writes to the server in a way that avoids putting the file into a corrupt state if power is lost. It doesn’t matter whether it writes the “2” at position 1 or the “2” at position 5000 first. In either case, it’s creating a situation where, for a moment, one of those two positions has a “0”, and the other has a “2”. If there’s a failure at that point – the server loses power, the network connection is severed – that’s the state the file winds up in. That’s a state that is inconsistent and should never have arisen. And if the file is a filesystem image, then the filesystem might be corrupt.
So I’d guess that both of those points in the stack – the NFS client writing data to the server, and the server’s block device scheduler – permit inconsistent state if there’s no fsync()/sync()/etc. being issued, which appears to be the default behavior for qemu. And running on NFS probably creates a larger window for a failure to induce corruption.
It’s possible that using qemu’s iSCSI backend avoids this issue, assuming that the iSCSI target avoids reordering. That’d avoid qemu going through the NFS layer.
I’m not going to dig further into this at the moment. I might be incorrect. But I felt that I should at least mention it, since filesystem images on NFS sounded a bit worrying.
Org-mode in emacs.
There are various mobile clients.
If you have something to sync files with, it’s just syncing org files. Probably mostly interesting to people who use emacs a lot on a PC, though.
No, because the DBMS is going to be designed to tolerate power loss in the middle of a write without being corrupted. It’ll do something vaguely like this, if you are, for example, overwriting an existing record with a new one:
Write that you are going to make a change in a way that does not affect existing data.
Perform a barrier operation (which could amount to just syncing to disk, or could just tell the OS’s disk cache system to place some restrictions on how it later syncs to disk, but in any event will ensure that all writes prior to the barrier operation are on disk prior to those write operations subsequent to it).
Replace the existing record. This may be destructive of existing data.
Potentially remove the data written in Step 1, depending upon database format.
If the DBMS loses power and comes back up, and the data from Step 1 is present and complete, it’ll consider the operation committed and simply continue the steps from there. If Step 1 is only partially on disk, it’ll consider the operation not committed, delete it, and treat the commit as not having gone through. From the DBMS’s standpoint, the change either happens as a whole or does not happen at all.
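A toy sketch of that journal-then-apply pattern in Python (this is a generic illustration, not any particular DBMS’s on-disk format; the file names and the fixed-size record are made up):

    import os

    RECORD_SIZE = 64

    def atomic_overwrite(data_path: str, journal_path: str, offset: int, record: bytes) -> None:
        assert len(record) == RECORD_SIZE

        # Step 1: record the intent (where and what) somewhere that does not
        # touch the existing data.
        jfd = os.open(journal_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        os.write(jfd, offset.to_bytes(8, "little") + record)
        # Step 2: the barrier, done here as a full flush: the intent must be
        # on nonvolatile storage before the destructive write begins.
        os.fsync(jfd)
        os.close(jfd)

        # Step 3: replace the existing record in place.
        dfd = os.open(data_path, os.O_RDWR)
        os.pwrite(dfd, record, offset)
        os.fsync(dfd)
        os.close(dfd)

        # Step 4: the change is fully applied, so the intent record can go.
        os.unlink(journal_path)

On startup after a crash, if the journal file is present and complete you redo the pwrite(); if it’s absent or truncated, you delete it and the change simply never happened.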
That works fine for power loss or if a filesystem is snapshotted at an instant in time. Seeing a partial commit, as long as the DBMS’s view of the system was at an instant in time, is fine; if you start it up against that state, it will either treat the change as complete and committed or throw out an incomplete commit.
However, if you are a backup program happily reading the contents of a file, you may be reading a database file with no synchronization, and may wind up with bits of one or multiple commits as the backup program reads the file and the DBMS writes to it – a corrupt database after the backup is restored.
Some databases support snapshotting (which won’t take the database down), and I believe that backup systems can be aware of the DBMS. I’m not a good person to ask as to best practices, because I don’t admin a DBMS, but it’s an issue that I do mention when people are talking about backups and DBMSes – if you have one, be aware that a backup system is going to have to take into account the DBMS one way or another if you want to potentially avoid backing up a database in inconsistent state.
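As one concrete example of a DBMS-aware approach, SQLite’s online backup API is easy to show (the file paths here are made up):

    import sqlite3

    # Open the live database and a destination for the backup copy.
    src = sqlite3.connect("/var/lib/myapp/app.db")
    dst = sqlite3.connect("/backup/app.db")

    # backup() uses SQLite's online backup API, so the copy is a consistent
    # snapshot of the database even if other connections keep writing to it
    # while the backup runs.
    src.backup(dst)

    dst.close()
    src.close()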
You need RAID
I’d say that one needs backups.
After one has backups, if it’s necessary, then I’d look at RAID for reducing downtime in the event of a drive failure.
But one doesn’t want to use RAID instead of backups.
https://serverfault.com/questions/2888/why-is-raid-not-a-backup
Why is RAID not a backup?
When someone mentions RAID in a conversation about backups, invariably someone declares that “RAID is not a backup.”
Sure, for striping, that’s true. But what’s the difference between redundancy and a backup?
RAID guards against one kind of hardware failure. There’s lots of failure modes that it doesn’t guard against.
File corruption
Human error (deleting files by mistake)
Catastrophic damage (someone dumps water onto the server)
Viruses and other malware
Software bugs that wipe out data
Hardware problems that wipe out data or cause hardware damage (controller malfunctions, firmware bugs, voltage spikes, …)
and more.
No disagreement with your broader point about a single drive ultimately being bounded in the kind of reliability that it can provide, though.
Note: If you want to back up a DBMS, you’re going to want to use some system that ensures that the backup is atomic.
and absolutely can confirm it’s very complex to setup properly.
To expand on this, dealing with anti-spam stuff is a pain. It’s easy to think that things are working fine, but then have email getting blackholed because of some anti-spam system on some specific remote system. Like, this isn’t a “the config files are complicated, but once it’s running, it’s fine” situation.
Go back to 2000 and running an email server was no big deal.
Mbin is designed to support both reasonably on one account.