[linux-lvm] alternative to pvmove on root volume

malahal at us.ibm.com
Thu Jan 28 21:17:20 UTC 2010

chris procter [chris-procter at talk21.com] wrote:
> (resent, it didn't seem to come through last time)
> Hi,
> I'm trying to migrate our servers from an old EVA to a
> shiny new NetApp SAN, if possible without downtime. For most of the
> volumes I can present LUNs from the new SAN and use pvmove to juggle
> the data around, but several servers have the root volume on the EVA and
> pvmove has a nasty habit of deadlocking the machine when used on root
> volumes.
> I've been working on the technique mentioned on http://sources.redhat.com/lvm2/wiki/FrequentlyAskedQuestions but after a bit of thought it seems it might be better to do the following:
> 0) add /dev/new_lun to the volume group
> 1) lvconvert -m 1 /dev/myvg/lvol00 /dev/new_lun
> 2) wait for the mirror to sync
> now we have RAID1 mirror copy of lvol00 on /dev/old_lun and /dev/new_lun so:
> 3) lvconvert -m 0 /dev/myvg/lvol00 /dev/old_lun
> This breaks the mirror in favour of new_lun and gets rid of the old leg,
> leaving us with a basic (non-mirrored) linear lvol entirely on the new_lun.
> 4) Rinse and repeat for all the other lvols on old_lun (which you can get from "dmsetup table")
> 5) vgreduce myvg /dev/old_lun
> It's less elegant than pvmove, but my initial testing seems to suggest it does actually work and doesn't cause deadlocks.
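The numbered steps above can be collected into a shell sketch. The VG, LV, and device names below are placeholders from the example in the post; this is an untested outline of the described procedure, not a finished migration script, so try it on a test system first:

```shell
#!/bin/sh
# Sketch of the mirror-and-split migration described above.
# Names (myvg, lvol00, old_lun, new_lun) are placeholders.
set -e

VG=myvg
LV=lvol00
OLD=/dev/old_lun
NEW=/dev/new_lun

# 0) add the new LUN to the volume group
pvcreate "$NEW"
vgextend "$VG" "$NEW"

# 1) attach a mirror leg on the new LUN
lvconvert -m 1 "/dev/$VG/$LV" "$NEW"

# 2) wait for the mirror to sync (Copy% reaches 100)
while [ "$(lvs --noheadings -o copy_percent "/dev/$VG/$LV" | tr -d ' ')" != "100.00" ]
do
    sleep 10
done

# 3) drop the leg on the old LUN, leaving a linear LV on the new one
lvconvert -m 0 "/dev/$VG/$LV" "$OLD"

# 4) repeat steps 1-3 for each remaining LV on $OLD, then:
# 5) remove the old LUN from the VG
vgreduce "$VG" "$OLD"
```

The sync check polls the `copy_percent` field from `lvs`; the exact formatting of that column can vary between LVM versions, so verify it on your release before relying on the loop.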

Have you tried pvmove on the same setup and actually seen the deadlocks?
If not, your test doesn't prove anything!

> However,
> given that pvmove also works by mirroring, I'm not convinced I haven't
> just been lucky so far. So does anyone have any ideas, or even better
> experience, on whether this is likely to work, or am I setting myself up
> for a world of pain if I try it on a live server?

Yes, pvmove works very similarly. It will create a mirror for each segment,
one at a time, so it may have to create a lot more mirrors depending on
your configuration. If it ends up needing more 'lvconvert's (suspends),
then the probability of failure (deadlock) will increase.
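To estimate how many temporary mirrors pvmove would create, you can list the LV segments sitting on the old PV beforehand. The device name is a placeholder:

```shell
# One line per segment on the PV; each segment gets its own
# temporary mirror (and suspend) during pvmove.
pvs --segments -o pv_name,lv_name,seg_start_pe,seg_size_pe /dev/old_lun
```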

