[linux-lvm] Removing disk from raid LVM

Libor Klepáč libor.klepac at bcom.cz
Wed Mar 11 13:05:37 UTC 2015


Hello John,

On Tue, 10 March 2015 10:05:38, John Stoffel wrote:
> Libor> we have 4x3TB disks in LVM for backups, and I set up per
> Libor> customer / per "task" LVs of type raid5 on it.
> 
> Can you post the configuration details please, since they do matter.
> It would seem to me, that it would be better to use 'md' to create the
> underlying RAID5 device, and then use LVM on top of that /dev/md0 to
> create the customer LV(s) as needed.

I have always used mdraid before (in fact, the OS is on other disks on mdraid), 
but I really liked the idea and flexibility of RAID in LVM and wanted to try it.
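
For context, the per-customer LVs were created directly as LVM RAID, roughly 
along these lines (a sketch with example VG/LV names and sizes, not my exact 
commands):

    # One VG over the four backup disks
    pvcreate /dev/sda /dev/sdb /dev/sdd /dev/sdg
    vgcreate vg_backup /dev/sda /dev/sdb /dev/sdd /dev/sdg

    # Per-customer RAID5 LV: 3 data stripes + parity spread across the 4 PVs
    lvcreate --type raid5 -i 3 -L 500G -n backup_cust1 vg_backup
    mkfs.xfs /dev/vg_backup/backup_cust1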

> 
> Libor> Last week, smartd started to warn us that one of the disks
> Libor> would soon go away.
> 
> Libor> So we shut down the computer, replaced the disk, and then I used
> Libor> vgcfgrestore on the new disk to restore the metadata.
> 
> You should have shut down the system, added in a new disk, and then
> rebooted the system.  At that point you would add the new disk into
> the RAID5, and then fail the dying disk.  It would be transparent to
> the LVM setup and be much safer.
> 

I see, but there is no physical space for an extra disk. Maybe an external disk 
would do the trick, but it would take hours to migrate the data, and the server 
is in a remote housing facility.
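
For LVM RAID, I understand the add-then-replace equivalent of that sequence 
would look roughly like this (device and VG/LV names below are hypothetical):

    # New disk shows up as, say, /dev/sdh
    pvcreate /dev/sdh
    vgextend vg_backup /dev/sdh

    # Move the RAID images of each LV off the failing PV onto the new one
    lvconvert --replace /dev/sdd vg_backup/backup_cust1
    # ...repeat for the remaining LVs...

    # Once nothing references the failing disk any more, drop it from the VG
    vgreduce vg_backup /dev/sdd

With only four bays that was not possible here, which is why we went the 
shutdown/vgcfgrestore route.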

> I'd also strongly advise you to get RAID6 setup and have a hot spare
> also setup, so that you don't have this type of issue in the future.
> 
> Libor> The result was that some LVs came up with a damaged filesystem, and
> Libor> some didn't come up at all (one of the rimage/rmeta sub-LVs was
> Libor> "wrong"; when I used the KVPM util, it was shown as type "virtual").
> 
> This sounds very much like you just lost a bunch of data, which RAID5
> shouldn't do.  So please post the details of your setup, starting at
> the disk level and moving up the stack to the filesystem(s) you have
> mounted for backups.  We don't need the customer names, etc, just the
> details of the system.
> 

System is Dell T20.

The backup disks are connected through:
00:1f.2 SATA controller: Intel Corporation Lynx Point 6-port SATA Controller 1 
[AHCI mode] (rev 04)
The system disks are connected through:
04:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 
6Gb/s Controller (rev 10)

The first four are 3TB SATA disks, 3.5", 7200 RPM:
[0:0:0:0]    disk    ATA      TOSHIBA MG03ACA3 n/a   /dev/sda 
[1:0:0:0]    disk    ATA      ST3000NM0033-9ZM n/a   /dev/sdb 
[2:0:0:0]    disk    ATA      ST3000NM0033-9ZM n/a   /dev/sdg 
[3:0:0:0]    disk    ATA      TOSHIBA MG03ACA3 n/a   /dev/sdd

The remaining two are 500GB 2.5" disks for the system:
[6:0:0:0]    disk    ATA      ST9500620NS      n/a   /dev/sde 
[8:0:0:0]    disk    ATA      ST9500620NS      n/a   /dev/sdf 

System is on mdraid (raid1) + LVM

On top of the LVs, we use ext4 for the OS and XFS for the backup/customer volumes.
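
For completeness: the controller and disk listings above are lspci and lsscsi 
output. The LVM side of the stack can be dumped with something like the 
following (vg_backup again stands in for the real VG name):

    pvs -o pv_name,vg_name,pv_size
    lvs -a -o lv_name,segtype,devices vg_backup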

> Also, which version of lvm, md, linux kernel, etc are you using?  The
> more details the better.

It's Debian Wheezy with kernel 3.14(.14).
The system LVM is:
  LVM version:     2.02.95(2) (2012-03-06)
  Library version: 1.02.74 (2012-03-06)
  Driver version:  4.27.0

I also use another, newer copy of LVM for the RAID operations (creating LVs, 
extending LVs, showing resync progress; see the example below the version 
listing):
  LVM version:     2.02.104(2) (2013-11-13)
  Library version: 1.02.83 (2013-11-13)
  Driver version:  4.27.0
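
The resync progress check I mentioned is typically something like this with the 
newer binary (vg_backup is again a stand-in name; copy_percent climbs towards 
100 as the RAID images sync):

    lvs -a -o lv_name,segtype,copy_percent,devices vg_backup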


Could the problem be that the VG/LVs were first constructed with the system's 
old utilities?
I think I could upgrade the whole system to Debian Jessie as a last-resort 
operation. That would bring the kernel to version 3.16 and LVM to 2.02.111.

Thanks for your reply

With regards,
Libor



