[linux-lvm] Removing disk from raid LVM

Premchand Gupta gpremchand at gmail.com
Wed Mar 11 23:12:24 UTC 2015


Hi,

If the software RAID is configured on top of LVM, then follow the steps below (a rough command sketch follows the list):
step 1) fail and remove the device from the RAID 5 array with mdadm
step 2) remove the disk from the LV, the VG, and then the PV
step 3) delete it from the system with the echo command
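
For example, a minimal sketch of those steps, assuming the md array is /dev/md0, its failing member is the LV /dev/vg0/member3, and that LV sits on the PV /dev/sdc1 (all of these names are placeholders, adjust them to your setup):

  # step 1: fail the member and pull it out of the RAID 5 array
  mdadm --manage /dev/md0 --fail /dev/vg0/member3
  mdadm --manage /dev/md0 --remove /dev/vg0/member3

  # step 2: remove the LV, drop the PV from the VG, then wipe the PV label
  lvremove /dev/vg0/member3
  vgreduce vg0 /dev/sdc1
  pvremove /dev/sdc1

  # step 3: tell the kernel to delete the disk from the system
  echo 1 > /sys/block/sdc/device/delete

Check with pvs/lvs that nothing else still lives on that PV before running vgreduce and pvremove.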

Thanks & Regards
Premchand S. Gupta
09820314487

On Mon, Mar 9, 2015 at 4:51 PM, Libor Klepáč <libor.klepac at bcom.cz> wrote:

>  Hello,
>
> we have 4x3TB disks in LVM for backups, and I set up a per-customer/per-
> "task" LV of type raid5 on it.
>
> Last week, smartd started to warn us that one of the disks would soon go
> away.
>
> So we shut down the computer, replaced the disk, and then I used vgcfgrestore
> on the new disk to restore the metadata.
>
> The result was that some LVs came up with a damaged filesystem, and some
> didn't come up at all, with messages like the ones below (one of the rimage
> and rmeta LVs was "wrong"; when I used the KVPM util, it showed type "virtual"):
>
> ----
>
> [123995.826650] mdX: bitmap initialized from disk: read 4 pages, set 1 of
> 98312 bits
>
> [124071.037501] device-mapper: raid: Failed to read superblock of device
> at position 2
>
> [124071.055473] device-mapper: raid: New device injected into existing
> array without 'rebuild' parameter specified
>
> [124071.055969] device-mapper: table: 253:83: raid: Unable to assemble
> array: Invalid superblocks
>
> [124071.056432] device-mapper: ioctl: error adding target to table
>
> ----
>
> After that, I tried several combinations of
>
> lvconvert --repair
>
> and
>
> lvchange -ay --resync
>
>
>
> without success. So I saved some data and then created new empty LVs and
> started backups from scratch.
>
>
>
> Today, smartd alerted on another disk.
>
> So how can I safely remove a disk from the VG?
>
> I tried to simulate it in a VM:
>
>
>
> echo 1 > /sys/block/sde/device/delete
>
> vgreduce -ff vgPecDisk2 /dev/sde #doesn't allow me to remove
>
> vgreduce --removemissing vgPecDisk2 #doesn't allow me to remove
>
> vgreduce --removemissing --force vgPecDisk2 #works, alerts me about rimage
> and rmeta LVs
>
>
>
> vgchange -ay vgPecDisk2 #works, but the LV isn't active; only the rimage and
> rmeta LVs show up.
>
>
>
> So how do I safely remove the soon-to-be-bad drive and insert a new drive
> into the array?
>
> The server has no more physical space for a new drive, so we cannot use pvmove.
>
>
>
> The server is Debian wheezy, but the kernel is 2.6.14.
>
> LVM is at version 2.02.95-8, but I have another copy that I use for RAID
> operations, which is at version 2.02.104.
>
>
>
> With regards
>
> Libor
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>

