[linux-lvm] LVM corruption/diagnosis

Radu Rendec radu.rendec at mindbit.ro
Thu Apr 7 05:47:44 UTC 2011


On Thu, 2011-04-07 at 14:06 +1200, Jan Bakuwel wrote:
> Problem solved. It was my brain mixing up /dev/d/ and /dev/mapper.
> Releasing the partition device with kpartx -d worked, as long as I used
> the correct path and didn't mix the VG name with "mapper".
>
> Radu: the first test I'll do is not to zero the partition, but to
> restore the image now that the partition device (/dev/d/xm.wxp1) is
> gone. I don't understand why it's there in the first place (dom0 has
> no business there). If that helps, it means the presence of that
> partition device was interfering with the VM. If it doesn't help, I'll
> zero the blocks and report back (some time next week).

I don't think that mapping the partitions with kpartx could affect the
VM, which reads from and writes to the LV directly.

But what I know for sure is that when you map a block device with
kpartx, the "partition" devices that kpartx creates under /dev/mapper
have read/write caches separate from those of the original block device
(the LV in your case).

One issue I have run into: when you write data to a kpartx-mapped
partition device, and some (or all) of the blocks you write happen to
already be in the read cache of the original block device (the LV),
you'll later read "old" data from the LV, even if you first unmap the
partitions with kpartx -d.
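
You can see the effect with something like this (hypothetical device
names; use a scratch LV because the test overwrites data, and it
assumes the first partition starts at a 1 MiB offset, which you can
check with "fdisk -l"):

  # map the partitions inside the LV; this creates something like
  # /dev/mapper/d-xm.wxp1 (the exact name depends on the kpartx version)
  kpartx -a /dev/d/xm.wxp

  # read an area of the LV so it lands in the LV's read cache
  dd if=/dev/d/xm.wxp bs=1M skip=1 count=1 2>/dev/null | md5sum

  # overwrite the same area through the mapped partition device
  dd if=/dev/urandom of=/dev/mapper/d-xm.wxp1 bs=1M count=1

  # re-read through the LV: you may still get the old checksum,
  # even after "kpartx -d"
  dd if=/dev/d/xm.wxp bs=1M skip=1 count=1 2>/dev/null | md5sum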

This is easily addressed by running "blockdev --flushbufs" on the LV
after "kpartx -d" and before you use the LV again (to start the VM, for
instance).
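
In your case (assuming the LV is /dev/d/xm.wxp) that would be:

  kpartx -d /dev/d/xm.wxp              # remove the partition mappings
  blockdev --flushbufs /dev/d/xm.wxp   # drop the LV's cached blocks
  # now it is safe to start the VM against the LV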

What type of image are you restoring: the whole LV (including its
partition table), or just the partition inside the LV (perhaps with
ntfsclone)? If you're restoring just the partition, and not following
up with "kpartx -d" and "blockdev --flushbufs", it's very likely that
you ran into caching issues.
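
For instance, a safe partition-only restore would look roughly like
this (the image file name is made up, and the /dev/mapper name may
differ slightly on your system):

  kpartx -a /dev/d/xm.wxp
  ntfsclone --restore-image --overwrite /dev/mapper/d-xm.wxp1 wxp.img
  kpartx -d /dev/d/xm.wxp
  blockdev --flushbufs /dev/d/xm.wxp

Without the last two steps, whatever reads the LV next can get stale
blocks from its cache.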

Best regards,

Radu