[linux-lvm] Removing disk from raid LVM

Libor Klepáč libor.klepac at bcom.cz
Tue Mar 10 09:34:26 UTC 2015


Hi,
thanks for the link.
I think this is the procedure that was used last week; maybe I read it on this very page.

Shut down the computer, replace the disk, boot the computer, create a PV with the old UUID, then run vgcfgrestore.
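
In shell terms the restore part looks roughly like this (a minimal sketch, not a verbatim transcript; the VG name vgPecDisk2 comes from the thread, while the UUID, archive file name and device name are placeholders):

# the old PV's UUID is recorded in the VG metadata archive for vgPecDisk2
ls /etc/lvm/archive/
less /etc/lvm/archive/<vgPecDisk2_NNNNN>.vg   # note the UUID listed for the failed device

# recreate the PV on the new disk, reusing the OLD UUID of the replaced one
pvcreate --uuid "<old-PV-UUID>" --restorefile /etc/lvm/archive/<vgPecDisk2_NNNNN>.vg /dev/sdX

# restore the VG metadata from the same archive and reactivate the VG
vgcfgrestore -f /etc/lvm/archive/<vgPecDisk2_NNNNN>.vg vgPecDisk2
vgchange -ay vgPecDisk2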

This "echo 1 > /sys/block/sde/device/delete" is what i test now in virtual 
machine, it's like if disk failed completly, i think LVM raid should be able to handle 
this situation ;)
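
The test itself goes roughly like this (again only a sketch; /dev/sde is the disk being "failed" and vgPecDisk2 is the VG from the thread):

# drop the device at the kernel level, as if it had died completely
echo 1 > /sys/block/sde/device/delete

# LVM should now report the PV as missing and the VG as partial
pvs
vgs vgPecDisk2
lvs -a -o +devices vgPecDisk2   # shows the raid LVs and their rimage/rmeta sub-LVs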

With regards,
Libor

On Tue, 10 March 2015 at 10:23:08, emmanuel segura wrote:
> echo 1 > /sys/block/sde/device/delete #this is wrong from my point of
> view, you first need to try to remove the disk from LVM
> 
> vgreduce -ff vgPecDisk2 /dev/sde #doesn't allow me to remove
> 
> vgreduce --removemissing vgPecDisk2 #doesn't allow me to remove
> 
> vgreduce --removemissing --force vgPecDisk2 #works, alerts me about the
> rimage and rmeta LVs
> 
> 
> 1: remove the failed device physically, not from LVM, and insert the new device
> 2: pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk" --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1 #to create the PV with the OLD UUID of the removed disk
> 3: now you can restore the VG metadata
> 
> 
> 
> https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/mdatarecover.html
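
One thing worth noting for the raid LVs: restoring the PV and VG metadata alone does not resynchronise the broken leg. Per LV, the follow-up is roughly this (a sketch only; lvBackup is a hypothetical LV name, and the commands are the same lvchange/lvconvert calls mentioned further below):

# activate and force a full resync from the surviving legs
lvchange -ay --resync vgPecDisk2/lvBackup

# or let lvconvert rebuild/replace the broken leg
lvconvert --repair vgPecDisk2/lvBackup

# watch the rebuild progress
lvs -a -o name,copy_percent,devices vgPecDisk2
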
> 2015-03-09 12:21 GMT+01:00 Libor Klepáč <libor.klepac at bcom.cz>:
> > Hello,
> > 
> > we have 4x3TB disks in LVM for backups, and I set up per-customer/per-"task"
> > LVs of type raid5 on it.
> > 
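For reference, such an LV is created roughly like this (a sketch; the name and size are made up, and -i 3 means three data stripes plus one parity across the four disks):

lvcreate --type raid5 -i 3 -L 500G -n lvCustomer1 vgPecDisk2
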
> > Last week, smartd started to warn us that one of the disks would soon fail.
> > 
> > So we shut down the computer, replaced the disk, and then I used vgcfgrestore
> > on the new disk to restore the metadata.
> > 
> > The result was that some LVs came up with a damaged filesystem, and some didn't
> > come up at all, with messages like the ones below (one of the rimage and rmeta
> > LVs was "wrong"; when I used the KVPM utility, its type was "virtual"):
> > 
> > ----
> > 
> > [123995.826650] mdX: bitmap initialized from disk: read 4 pages, set 1 of 98312 bits
> > 
> > [124071.037501] device-mapper: raid: Failed to read superblock of device at position 2
> > 
> > [124071.055473] device-mapper: raid: New device injected into existing array without 'rebuild' parameter specified
> > 
> > [124071.055969] device-mapper: table: 253:83: raid: Unable to assemble array: Invalid superblocks
> > 
> > [124071.056432] device-mapper: ioctl: error adding target to table
> > 
> > ----
> > 
> > After that, I tried several combinations of
> > 
> > lvconvert --repair
> > 
> > and
> > 
> > lvchange -ay --resync
> > 
> > 
> > 
> > Without success. So I saved some data and then created new empty LVs and
> > started backups from scratch.
> > 
> > 
> > 
> > Today, smartd alerted us about another disk.
> > 
> > So how can I safely remove a disk from the VG?
> > 
> > I tried to simulate it in a VM:
> > 
> > 
> > 
> > echo 1 > /sys/block/sde/device/delete
> > 
> > vgreduce -ff vgPecDisk2 /dev/sde #doesn't allow me to remove
> > 
> > vgreduce --removemissing vgPecDisk2 #doesn't allow me to remove
> > 
> > vgreduce --removemissing --force vgPecDisk2 #works, alerts me about the
> > rimage and rmeta LVs
> > 
> > 
> > 
> > vgchange -ay vgPecDisk2 #works, but the LV isn't active; only the rimage and
> > rmeta LVs show up.
> > 
> > 
> > 
> > So how do I safely remove the soon-to-be-bad drive and insert a new drive into
> > the array?
> > 
> > The server has no more physical space for a new drive, so we cannot use pvmove.
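
With a free drive slot, the usual route would be roughly the following (a sketch of the standard pvmove approach, not something taken from the thread; /dev/sdNEW and /dev/sdOLD are placeholders, and depending on the LVM version pvmove may refuse to move extents that belong to raid LVs):

pvcreate /dev/sdNEW                  # prepare the new disk
vgextend vgPecDisk2 /dev/sdNEW       # add it to the VG
pvmove /dev/sdOLD                    # migrate extents off the failing disk
vgreduce vgPecDisk2 /dev/sdOLD       # drop the old disk from the VG

Without a spare slot that path is closed, hence the question above.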
> > 
> > 
> > 
> > The server is Debian wheezy, but the kernel is 2.6.14.
> > 