[linux-lvm] Removing disk from raid LVM

emmanuel segura emi2fast at gmail.com
Tue Mar 10 09:23:08 UTC 2015


echo 1 > /sys/block/sde/device/delete #this is wrong from my point of
view, you first need to try to remove the disk from LVM

vgreduce -ff vgPecDisk2 /dev/sde #doesn't allow me to remove

vgreduce --removemissing vgPecDisk2 #doesn't allow me to remove

vgreduce --removemissing --force vgPecDisk2 #works, warns me about the
rimage and rmeta LVs
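Before reaching for --force, it helps to see which sub-LVs actually sit on the failing PV and whether LVM already considers it missing. A minimal sketch, reusing the VG and device names from the example above (adjust to your system):

```shell
# Which PVs does the VG have, and is one shown as an unknown/missing device?
pvs -o pv_name,vg_name,pv_uuid

# Which raid sub-LVs (rimage_N / rmeta_N) are mapped onto /dev/sde?
lvs -a -o lv_name,segtype,devices vgPecDisk2

# Kernel-side view of the assembled raid targets
dmsetup status
```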


1: physically remove the failed device (not from LVM) and insert the new device
2: pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk"
--restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1 #to create the PV
with the OLD UUID of the removed disk
3: now you can restore the VG metadata with vgcfgrestore
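Put together, the replace-and-restore sequence looks roughly like this. This is a sketch, not a tested recipe: the UUID and archive file are the ones from step 2 above (which follow the linked CentOS doc's example), /dev/sdh1 stands in for the new disk, and <lv> is a placeholder for each affected raid LV, so adjust all of them to your own system:

```shell
# 1) after physically swapping the disk, recreate the PV with the old UUID
pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk" \
         --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1

# 2) restore the VG metadata from the same archive file
vgcfgrestore -f /etc/lvm/archive/VG_00050.vg vgPecDisk2

# 3) reactivate the VG and let the raid LVs resync onto the new disk
vgchange -ay vgPecDisk2
lvchange --resync vgPecDisk2/<lv>   # or: lvconvert --repair vgPecDisk2/<lv>
```

Check progress afterwards with `lvs -a -o lv_name,sync_percent vgPecDisk2`.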


https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/mdatarecover.html

2015-03-09 12:21 GMT+01:00 Libor Klepáč <libor.klepac at bcom.cz>:
> Hello,
>
> we have 4x3TB disks in LVM for backups, and I set up per-customer/per-"task"
> LVs of type raid5 on it.
>
> Last week, smartd started to alarm us that one of the disks will soon
> fail.
>
> So we shut down the computer, replaced the disk, and then I used vgcfgrestore on
> the new disk to restore the metadata.
>
> The result was that some LVs came up with a damaged filesystem, and some didn't
> come up at all, with messages like the ones below (one of the rimage and rmeta
> pairs was "wrong"; when I used the KVPM util, its type was "virtual"):
>
> ----
>
> [123995.826650] mdX: bitmap initialized from disk: read 4 pages, set 1 of
> 98312 bits
>
> [124071.037501] device-mapper: raid: Failed to read superblock of device at
> position 2
>
> [124071.055473] device-mapper: raid: New device injected into existing array
> without 'rebuild' parameter specified
>
> [124071.055969] device-mapper: table: 253:83: raid: Unable to assemble
> array: Invalid superblocks
>
> [124071.056432] device-mapper: ioctl: error adding target to table
>
> ----
>
> After that, I tried several combinations of
>
> lvconvert --repair
>
> and
>
> lvchange -ay --resync
>
>
>
> Without success. So I saved some data, then created new empty LVs and
> started backups from scratch.
>
>
>
> Today, smartd alerted on another disk.
>
> So how can I safely remove a disk from the VG?
>
> I tried to simulate it in a VM:
>
>
>
> echo 1 > /sys/block/sde/device/delete
>
> vgreduce -ff vgPecDisk2 /dev/sde #doesn't allow me to remove
>
> vgreduce --removemissing vgPecDisk2 #doesn't allow me to remove
>
> vgreduce --removemissing --force vgPecDisk2 #works, warns me about the rimage
> and rmeta LVs
>
>
>
> vgchange -ay vgPecDisk2 #works, but the LV isn't active; only the rimage and
> rmeta LVs show up.
>
>
>
> So how do I safely remove the soon-to-be-bad drive and insert a new drive into the array?
>
> Server has no more physical space for new drive, so we cannot use pvmove.
>
>
>
> The server is Debian wheezy, but the kernel is 2.6.14.
>
> LVM is version 2.02.95-8, but I have another copy I use for raid
> operations, which is version 2.02.104.
>
>
>
> With regards
>
> Libor
>
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



-- 
this is my life and I live it as long as God wills
