[linux-lvm] How to trash a broken VG

Roger Heflin rogerheflin at gmail.com
Fri Aug 3 13:18:29 UTC 2018


Assuming you want to eliminate the VG completely so that you can
rebuild it from scratch, and the LVs are no longer mounted, the
following should work. If an LV is still mounted, remove it from
fstab, reboot, and see what state it comes up in; then first attempt
to deactivate it with vgchange, as that is cleaner than the dmsetup
tricks below.
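
For example, deactivating the whole VG in one go (a sketch, using the
vg_backup name from the report below):

vgchange -an vg_backup

If that succeeds and /dev/vg_backup/ is left empty, the dmsetup steps
below are unnecessary.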

If you cannot get the LVs deactivated with lvchange such that
/dev/<vgname> is empty or nonexistent, there is a lower-level way
(this still requires the device to be unmounted; if it is mounted,
the command will fail).
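
Per-LV deactivation is a sketch along these lines (vg_backup/lv_backup
are the names from the report below):

lvchange -an vg_backup/lv_backup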

dmsetup table | grep <vgname>

Then run dmsetup remove <lvnamefromabove> until all component LVs are
removed; this should empty the /dev/<vgname>/ directory of all
devices.
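
A sketch of that sequence, assuming the VG is named vg_backup
(device-mapper names join the VG and LV names with a hyphen, so the
table entries look like vg_backup-lv_backup):

dmsetup table | grep vg_backup
dmsetup remove vg_backup-lv_backup
dmsetup remove vg_backup-pvmove0

Repeat the remove step for each entry the grep shows until
/dev/vg_backup/ is empty.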

Once in this state you can use the pvremove command with the extra
force options; it will tell you which VG each PV was part of and
require you to answer y or n.
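
The forced wipe would look something like this (a sketch; -ff is
pvremove's double-force option, and /dev/sdi1 is the device from the
report below):

pvremove -ff /dev/sdi1

It prints which VG the label belonged to and prompts for y or n
before wiping it.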

I have had to do this a number of times when disks were lost, died,
or became corrupted.



On Fri, Aug 3, 2018 at 12:21 AM, Jeff Allison
<jeff.allison at allygray.2y.net> wrote:
> OK Chaps I've broken it.
>
> I have a VG containing one LV and made up of 3 live disks and 2 failed disks.
>
> Whilst the disks were failing I attempted to move data off the failing
> disks, which failed, so I now have a pvmove0 that won't go away either.
>
>
> So if I attempt to even remove a live disk I get an error.
>
> [root@nas ~]# vgreduce -v vg_backup /dev/sdi1
>     Using physical volume(s) on command line.
>     Wiping cache of LVM-capable devices
>     Wiping internal VG cache
>   Couldn't find device with uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV.
>   Couldn't find device with uuid LxXhsb-Mgag-ESXZ-fPP6-52dE-iNp2-uKdLwu.
>     There are 2 physical volumes missing.
>   Cannot change VG vg_backup while PVs are missing.
>   Consider vgreduce --removemissing.
>     There are 2 physical volumes missing.
>   Cannot process volume group vg_backup
>   Failed to find physical volume "/dev/sdi1".
>
> Then if I attempt a vgreduce --removemissing I get
>
> [root@nas ~]# vgreduce --removemissing vg_backup
>   Couldn't find device with uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV.
>   Couldn't find device with uuid LxXhsb-Mgag-ESXZ-fPP6-52dE-iNp2-uKdLwu.
>   WARNING: Partial LV lv_backup needs to be repaired or removed.
>   WARNING: Partial LV pvmove0 needs to be repaired or removed.
>   There are still partial LVs in VG vg_backup.
>   To remove them unconditionally use: vgreduce --removemissing --force.
>   Proceeding to remove empty missing PVs.
>
> So I try force
> [root@nas ~]# vgreduce --removemissing --force vg_backup
>   Couldn't find device with uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV.
>   Couldn't find device with uuid LxXhsb-Mgag-ESXZ-fPP6-52dE-iNp2-uKdLwu.
>   Removing partial LV lv_backup.
>   Can't remove locked LV lv_backup.
>
> So no go.
>
> If I try lvremove pvmove0
>
> [root@nas ~]# lvremove -v pvmove0
>     Using logical volume(s) on command line.
>     VG name on command line not found in list of VGs: pvmove0
>     Wiping cache of LVM-capable devices
>   Volume group "pvmove0" not found
>   Cannot process volume group pvmove0
>
> So Heeelp I seem to be caught in some kind of loop.
>