LVM negates benefits of journaling filesystems? [was RFE: autofsck]

Yaakov Nemoy loupgaroublond at gmail.com
Wed Jun 11 21:26:38 UTC 2008


2008/6/11 Callum Lerwick <seg at haxxed.com>:
> On Tue, Jun 10, 2008 at 9:53 AM, Eric Sandeen <sandeen at redhat.com> wrote:
>>
>> I think the problem is, barriers are really implemented as drive cache
>> flushes.  So under certain workloads the performance really does hurt.
>> But if the alternative is a good chance of filesystem corruption on
>> power loss, remind me again why we run a journaling filesystem at all?  :)
>>
>> If ext3 doesn't get barriers on by default upstream then I would suggest
>> that yes, we should patch the kernel ourselves to do so, or set the
>> installer to create fstabs which specify it for filesystems that don't
>> have barriers on by default.  ... after lvm stops filtering it out,
>> anyway.
>
> I would like to put in my +1 for this. Performance is pointless if you
> cannot trust that your data is safe. I have on many occasions run fscks on
> my supposedly clean ext3 filesystems, only to find some mild corruption. How
> can this happen? Isn't journaling supposed to prevent this? One day I ran a
> fsck before doing some filesystem resizing, only to find one of my
> irreplaceable personal photos had become corrupted. I had no way to know when
> or why this file got corrupted; it had been written to disk some time ago
> and never touched since. I trusted journaling, and it failed me. (Yes, I
> have a backup. I think...) After this, I now turn on autofsck on all my
> machines, so that corruption at least can't go undetected for years. That
> means that after a power failure it takes my primary desktop, with a pretty
> full 250GB drive, 20-30 minutes to come back up, which is incredibly irritating,
> but I have to know my data is safe. I've even picked up a habit of
> obsessively checksumming all my really important files. I wish the
> filesystem would help do this for me. (ZFS...)
>
> Knowing is half the battle. See, what can happen here is that a file can get
> corrupted and I may not notice until years later. By then I may have cycled
> through several full backups, and long since lost the backup I did have of
> the file...
>
> This must be fixed. Only through a long painful process of losing faith have
> I learned to not trust my filesystems. I suspect there are many others out
> there who have been bitten by filesystem corruption and just don't know it
> yet.
>
> Only now do I learn the likely reason for this corruption. How would I have
> reported this? I just assumed it was hardware glitches.

Journaling won't necessarily save you from this problem.  What
journaling protects you against is a process aborting halfway through
because of some failure or other.  The integrity you get is not that
every piece of data is intact and correct, but that nothing is left in
a transitory state.  This means any transaction, i.e. any change of
state from A to B that affects more than one bit on the hard drive, is
a concrete, isolated and discrete change.  That is the only guarantee
you get from a journaled filesystem.
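To make the mechanics a bit more concrete, here is a toy sketch of the
write-ahead idea in C.  It is not ext3's actual jbd code; the file name and
helper are made up for illustration, and fsync() stands in for the
barrier/cache-flush step that started this thread:

    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>

    /* Append a record to the "journal" and force it to stable storage
     * before anything that depends on it is written. */
    static void journal_append(int jfd, const void *buf, size_t len)
    {
        write(jfd, buf, len);   /* error handling omitted for brevity */
        fsync(jfd);             /* the "barrier": without a real cache
                                   flush the drive can still lie to us */
    }

    int main(void)
    {
        int jfd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (jfd < 0)
            return 1;

        const char blocks[] = "new copies of the changed metadata blocks";
        const char commit[] = "COMMIT";

        journal_append(jfd, blocks, sizeof blocks);  /* 1. log the change   */
        journal_append(jfd, commit, sizeof commit);  /* 2. commit record    */
        /* 3. only after the commit record is on disk may the blocks be
           written in place; a crash before that point replays or discards
           the whole transaction, never half of it. */
        close(jfd);
        return 0;
    }

The ordering in steps 1-3 is the whole trick, and it is presumably why
dropping barriers matters: if the drive reorders those writes in its cache,
the commit record can reach the platter before the data it commits.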

In practice this means that if your system crashes while doing a
banking transaction, you don't lose money without a record of whom you
paid it to.  It means that if the power goes out while you are
rotating a photo, you don't end up with a half-rotated photo saved on
your hard drive.  It means there are no lost blocks on your hard drive
from files that never finished saving at a critical moment.  File copy
operations are either completed or not.
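An application can arrange that same all-or-nothing behaviour for its own
file updates as well, typically by writing the new version to a temporary
file, syncing it, and then rename()-ing it over the original; rename()
within one filesystem is atomic, so a crash leaves either the old photo or
the complete new one, never a mix.  A minimal sketch (the file names are
made up):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Save a rotated image by atomic replace: photo.jpg ends up as either
     * the old version or the complete new one, never a truncated mix. */
    int save_rotated(const unsigned char *img, size_t len)
    {
        int fd = open("photo.jpg.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, img, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            unlink("photo.jpg.tmp");
            return -1;
        }
        close(fd);
        return rename("photo.jpg.tmp", "photo.jpg");   /* the atomic step */
    }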

You don't gain protection for data sitting in long-term storage.  You
do gain transaction safety.

-Yaakov
