[linux-lvm] lvm raid1 metadata on different pv

emmanuel segura emi2fast at gmail.com
Mon Sep 25 10:46:54 UTC 2017


I did as follows and it works, without storing the metadata on zram:

  truncate -s 500M disk1.dd
  truncate -s 500M disk2.dd
  losetup /dev/loop0 /root/disk1.dd
  losetup /dev/loop1 /root/disk2.dd
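
(If /dev/loop0 or /dev/loop1 is already in use, losetup can pick the
next free device itself; the loop_a/loop_b names below are just
placeholders:

  loop_a=$(losetup -f --show /root/disk1.dd)
  loop_b=$(losetup -f --show /root/disk2.dd)
)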

pvcreate /dev/loop0
pvcreate /dev/loop1
vgcreate vg00 /dev/loop0 /dev/loop1

lvcreate --type raid1 --name raid1vol -L 400M vg00

root@puppetserver:~# lvs -ao +devices
  LV                  VG   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert Devices
  raid1vol            vg00 rwi-a-m- 400.00m                             100.00         raid1vol_rimage_0(0),raid1vol_rimage_1(0)
  [raid1vol_rimage_0] vg00 iwi-aor- 400.00m                                            /dev/loop0(1)
  [raid1vol_rimage_1] vg00 iwi-aor- 400.00m                                            /dev/loop1(1)
  [raid1vol_rmeta_0]  vg00 ewi-aor-   4.00m                                            /dev/loop0(0)
  [raid1vol_rmeta_1]  vg00 ewi-aor-   4.00m                                            /dev/loop1(0)

Now I will move the metadata to other PVs:

truncate -s 100M meta1.dd
truncate -s 100M meta2.dd

 losetup /dev/loop2 meta1.dd
 losetup /dev/loop3 meta2.dd

 vgextend vg00 /dev/loop2
 vgextend vg00 /dev/loop3

pvmove -n 'raid1vol_rmeta_0' /dev/loop0 /dev/loop2
pvmove -n 'raid1vol_rmeta_1' /dev/loop1 /dev/loop3
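
Before cycling activation it's worth checking that the rmeta
subvolumes really landed on the new PVs; the same lvs call as above,
trimmed to the relevant columns, should show something like:

  lvs -a -o name,devices vg00
  # expected after the moves:
  #   [raid1vol_rmeta_0]  /dev/loop2(0)
  #   [raid1vol_rmeta_1]  /dev/loop3(0)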

vgchange -an vg00
  0 logical volume(s) in volume group "vg00" now active

vgchange -ay vg00
  1 logical volume(s) in volume group "vg00" now active
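
For completeness, tearing the test setup down again (assuming nothing
else uses these loop devices):

  vgchange -an vg00
  vgremove -f vg00
  losetup -d /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
  rm /root/disk1.dd /root/disk2.dd meta1.dd meta2.dd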




2017-09-25 11:30 GMT+02:00 Alexander 'Leo' Bergolth <leo at strike.wu.ac.at>:

> Hi!
>
> I tried to move the raid1 metadata subvolumes to different PVs (SSD
> devices for performance).
>
> Moving with pvmove works fine, but activation fails when both legs of the
> metadata have been moved to external devices. (See below.)
>
> Interestingly, moving just one metadata LV to another device works fine.
> (The raid LV can be activated afterwards.)
>
> I guess raid1 metadata on different PVs is not supported (yet)?
>
> I am using CentOS 7.4 and kernel 3.10.0-693.el7.x86_64.
>
> Cheers,
> --leo
>
> -------------------- 8< --------------------
> modprobe zram num_devices=2
> echo 300M > /sys/block/zram0/disksize
> echo 300M > /sys/block/zram1/disksize
>
> pvcreate /dev/sda2
> pvcreate /dev/sdb2
> pvcreate /dev/zram0
> pvcreate /dev/zram1
>
> vgcreate vg_sys /dev/sda2 /dev/sdb2 /dev/zram0 /dev/zram1
> lvcreate --type raid1 -m 1 --regionsize 64M -L 500m -n lv_boot vg_sys
> /dev/sda2 /dev/sdb2
>
> pvmove -n 'lv_boot_rmeta_0' /dev/sda2 /dev/zram0
> # and maybe
> # pvmove -n 'lv_boot_rmeta_1' /dev/sdb2 /dev/zram1
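> # if activation fails after both moves, moving the metadata back
> # should (untested here) make the raid LV activatable again:
> # pvmove -n 'lv_boot_rmeta_0' /dev/zram0 /dev/sda2
> # pvmove -n 'lv_boot_rmeta_1' /dev/zram1 /dev/sdb2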
>
> -------------------- 8< --------------------
>     Creating vg_sys-lv_boot
>         dm create vg_sys-lv_boot LVM-l6Eg7Uvcm2KieevnXDjLLje3wqmSVGa1e56whxycwUR2RvGvcQNLy1GdfpzlZuQk [ noopencount flush ]   [16384] (*1)
>     Loading vg_sys-lv_boot table (253:7)
>       Getting target version for raid
>         dm versions   [ opencount flush ]   [16384] (*1)
>       Found raid target v1.12.0.
>         Adding target to (253:7): 0 1024000 raid raid1 3 0 region_size 8192 2 253:3 253:4 253:5 253:6
>         dm table   (253:7) [ opencount flush ]   [16384] (*1)
>         dm reload   (253:7) [ noopencount flush ]   [16384] (*1)
>   device-mapper: reload ioctl on  (253:7) failed: Input/output error
> -------------------- 8< --------------------
> [ 8130.110467] md/raid1:mdX: active with 2 out of 2 mirrors
> [ 8130.111361] mdX: failed to create bitmap (-5)
> [ 8130.112254] device-mapper: table: 253:7: raid: Failed to run raid array
> [ 8130.113154] device-mapper: ioctl: error adding target to table
> -------------------- 8< --------------------
> # lvs -a -o+devices
>   LV                 VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
>   lv_boot            vg_sys rwi---r--- 500.00m                                             lv_boot_rimage_0(0),lv_boot_rimage_1(0)
>   [lv_boot_rimage_0] vg_sys Iwi-a-r-r- 500.00m                                             /dev/sda2(1)
>   [lv_boot_rimage_1] vg_sys Iwi-a-r-r- 500.00m                                             /dev/sdb2(1)
>   [lv_boot_rmeta_0]  vg_sys ewi-a-r-r-   4.00m                                             /dev/zram0(0)
>   [lv_boot_rmeta_1]  vg_sys ewi-a-r-r-   4.00m                                             /dev/zram1(0)
> -------------------- 8< --------------------
>
> Full vgchange output can be found at:
>   http://leo.kloburg.at/tmp/lvm-raid1-ext-meta/
>
>
> --
> e-mail   ::: Leo.Bergolth (at) wu.ac.at
> fax      ::: +43-1-31336-906050
> location ::: IT-Services | Vienna University of Economics | Austria
>



-- 
  .~.
  /V\
 //  \\
/(   )\
^`~'^