[linux-lvm] Removing disk from raid LVM
libor.klepac at bcom.cz
Mon Mar 9 11:21:15 UTC 2015
We have 4x3TB disks in an LVM volume group for backups, and I set up per-customer/per-"task"
LVs of type raid5 on it.
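For context, the setup was along these lines (the VG name, device names, and sizes below are placeholders, not the real ones):

```shell
# Hypothetical devices and names, for illustration only.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vgBackup /dev/sdb /dev/sdc /dev/sdd /dev/sde

# One raid5 LV per customer/task; -i 3 means three data stripes
# plus one parity stripe, spread across the four PVs.
lvcreate --type raid5 -i 3 -L 500G -n lvCustomerA vgBackup
```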
Last week, smartd started alarming us that one of the disks would soon fail.
So we shut down the computer, replaced the disk, and then I used vgcfgrestore on
the new disk to restore the metadata.
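Roughly, the restore looked like this (the UUID is a placeholder; the real one came from the metadata backup under /etc/lvm/backup):

```shell
# Recreate the PV label on the replacement disk, reusing the old PV's
# UUID as recorded in the most recent metadata backup.
pvcreate --uuid "<old-pv-uuid>" --restorefile /etc/lvm/backup/vgPecDisk2 /dev/sde

# Restore the VG metadata from the backup and reactivate the VG.
vgcfgrestore -f /etc/lvm/backup/vgPecDisk2 vgPecDisk2
vgchange -ay vgPecDisk2
```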
The result was that some LVs came up with a damaged filesystem, and some didn't come
up at all, with messages like the ones below (one of the rimage and rmeta sub-LVs was
"wrong"; when I looked with the KVPM utility, its type was "virtual"):
[123995.826650] mdX: bitmap initialized from disk: read 4 pages, set 1 of
[124071.037501] device-mapper: raid: Failed to read superblock of device at
[124071.055473] device-mapper: raid: New device injected into existing array
without 'rebuild' parameter specified
[124071.055969] device-mapper: table: 253:83: raid: Unable to assemble array:
[124071.056432] device-mapper: ioctl: error adding target to table
After that, I tried several combinations of
lvchange -ay --resync
without success. So I saved what data I could, then created new empty LVs and
started the backups from scratch.
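The combinations I tried were variations on the following (the LV name is a placeholder):

```shell
# Deactivate the LV, then reactivate it while forcing a full resync
# of the raid5 array.
lvchange -an vgPecDisk2/lvCustomerA
lvchange -ay --resync vgPecDisk2/lvCustomerA
```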
Today, smartd alerted us about another disk.
So how can I safely remove a disk from the VG?
I tried to simulate it in a VM:
echo 1 > /sys/block/sde/device/delete
vgreduce -ff vgPecDisk2 /dev/sde # doesn't allow me to remove it
vgreduce --removemissing vgPecDisk2 # doesn't allow me to remove it
vgreduce --removemissing --force vgPecDisk2 # works, but warns me about the rimage
and rmeta LVs
vgchange -ay vgPecDisk2 # works, but the LV isn't active, only the rimage and rmeta LVs
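From what I understand of the lvconvert(8) documentation, the supported path for a failed raid leg is to add a replacement PV and repair each degraded raid LV, rather than restoring metadata with vgcfgrestore. A sketch, assuming a hypothetical replacement device /dev/sdf and placeholder LV names:

```shell
# Add the replacement disk to the VG.
pvcreate /dev/sdf
vgextend vgPecDisk2 /dev/sdf

# Rebuild each degraded raid5 LV onto the new PV
# (repeat for every raid LV in the VG).
lvconvert --repair vgPecDisk2/lvCustomerA /dev/sdf

# Once all LVs are repaired, drop the missing PV from the VG.
vgreduce --removemissing vgPecDisk2
```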
So how do I safely remove the soon-to-fail drive and insert a new drive into the array?
The server has no physical space for another drive, so we cannot use pvmove.
The server runs Debian wheezy, but the kernel is 2.6.14.
LVM is version 2.02.95-8, but I have another copy that I use for raid operations,
which is version 2.02.104.