[linux-lvm] Removing disk from raid LVM
john at stoffel.org
Tue Mar 10 14:05:38 UTC 2015
Libor> we have 4x3TB disks in LVM for backups and I set up
Libor> per-customer/per-"task" LVs of type raid5 on it.
Can you post the configuration details, please? They do matter.
It seems to me that it would be better to use 'md' to create the
underlying RAID5 device, and then use LVM on top of that /dev/md0 to
create the customer LV(s) as needed.
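As a rough sketch of that layout (device and VG names here are just
examples, not your actual configuration):

```shell
# Build one RAID5 array from the four disks
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Put LVM on top of the single md device
pvcreate /dev/md0
vgcreate backupvg /dev/md0

# Carve out plain (non-raid) LVs per customer/task
lvcreate -L 500G -n customer1 backupvg
```

That way md owns all the redundancy and LVM only does volume
management, which keeps recovery much simpler.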
Libor> Last week, smartd started to alarm us, that one of the disk
Libor> will soon go away.
Libor> So we shut down the computer, replaced the disk, and then I used
Libor> vgcfgrestore on new disk to restore metadata.
You should have shut down the system, added in a new disk, and then
rebooted the system. At that point you would add the new disk into
the RAID5, and then fail the dying disk. It would be transparent to
the LVM setup and be much safer.
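With md underneath, the swap would have looked something like this
(sde = new disk, sdb = dying disk; device names are assumptions):

```shell
mdadm /dev/md0 --add /dev/sde1     # add the replacement first
mdadm /dev/md0 --fail /dev/sdb1    # then fail the dying member
cat /proc/mdstat                   # wait for the rebuild to finish
mdadm /dev/md0 --remove /dev/sdb1  # only then remove the old disk
```

No vgcfgrestore needed at all; LVM never notices the disk changed.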
I'd also strongly advise you to set up RAID6 and configure a hot
spare as well, so that you don't run into this type of issue in the
future.
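For example, with six drives (again, device names are examples):

```shell
# RAID6 across five disks plus one hot spare; survives two disk
# failures and starts rebuilding automatically onto the spare
mdadm --create /dev/md0 --level=6 --raid-devices=5 \
    --spare-devices=1 /dev/sd[a-f]1
```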
Libor> Result was, that some LVs came up with damaged filesystem, some
Libor> didn't come up at all, with messages like (one of rimage and
Libor> rmeta was "wrong"; when I used the KVPM util, it was type "virtual")
This sounds very much like you just lost a bunch of data, which RAID5
shouldn't do. So please post the details of your setup, starting at
the disk level and moving up the stack to the filesystem(s) you have
mounted for backups. We don't need the customer names, etc, just the
details of the system.
Also, which version of lvm, md, linux kernel, etc are you using? The
more details the better.
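Something like the output of these would cover most of it (assuming
the usual tools are installed):

```shell
uname -r                  # kernel version
lvm version               # LVM and device-mapper versions
mdadm --version
lsblk                     # physical disk / partition layout
pvs
vgs
lvs -a -o +devices,segtype  # full LV stack, incl. rimage/rmeta subvolumes
```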