[linux-lvm] lv raid - how to read this?
Heinz Mauelshagen
heinzm at redhat.com
Thu Sep 7 16:27:32 UTC 2017
Works fine on upstream with current lvm:
[root at vm254 ~]# lvs -aoname,size,attr,segtype,syncpercent,reshapelen,devices nvm
  LV           LSize   Attr       Type   Cpy%Sync RSize Devices
  r              3.67t rwi-a-r--- raid0                 r_rimage_0(0),r_rimage_1(0),r_rimage_2(0),r_rimage_3(0)
  [r_rimage_0] 939.52g iwi-aor--- linear                /dev/sda(0)
  [r_rimage_1] 939.52g iwi-aor--- linear                /dev/sdb(0)
  [r_rimage_2] 939.52g iwi-aor--- linear                /dev/sdc(0)
  [r_rimage_3] 939.52g iwi-aor--- linear                /dev/sdd(0)
[root at vm254 ~]# time mkfs -t xfs -f /dev/nvm/r
<SNIP>
= sunit=4 swidth=16 blks
<SNIP>
What kernel/lvm versions are you running this on?
mkfs.xfs retrieves the stripe geometry from /sys/block/dm-N/queue/*size
(look up dm-N via /dev/mapper/), and those values may be bogus on your
system. You can override the kernel-reported values by passing the
geometry explicitly (-d sunit=N,swidth=M) on the mkfs.xfs command line.
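
A minimal sketch of that lookup (the dm-3 name is hypothetical; read
the real name off the /dev/mapper symlink first):

[root at vm254 ~]# readlink /dev/mapper/nvm-r                 # -> ../dm-3, say
[root at vm254 ~]# cat /sys/block/dm-3/queue/minimum_io_size  # stripe unit, in bytes
[root at vm254 ~]# cat /sys/block/dm-3/queue/optimal_io_size  # full stripe width, in bytes

With -I16 and 4 stripes those should read 16384 and 65536. mkfs.xfs
takes the override in 512-byte sectors, so for that geometry:

[root at vm254 ~]# mkfs.xfs -f -d sunit=32,swidth=128 /dev/nvm/r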
Heinz
On 09/07/2017 03:12 PM, lejeczek wrote:
>
>
> On 07/09/17 10:16, Zdenek Kabelac wrote:
>> On 7.9.2017 at 10:06, lejeczek wrote:
>>> hi fellas
>>>
>>> I'm setting up an lvm raid0 across 4 devices; I want raid0, and I
>>> understand & expect there will be four stripes. All I care about is speed.
>>> I do:
>>> $ lvcreate --type raid0 -i 4 -I 16 -n 0 -l 96%pv intel.raid0-0
>>> /dev/sd{c..f} # explicitly four stripes
>>>
>>> I see:
>>> $ mkfs.xfs /dev/mapper/intel.sataA-0 -f
>>> meta-data=/dev/mapper/intel.sataA-0 isize=512    agcount=32, agsize=30447488 blks
>>>          =                          sectsz=512   attr=2, projid32bit=1
>>>          =                          crc=1        finobt=0, sparse=0
>>> data     =                          bsize=4096   blocks=974319616, imaxpct=5
>>>          =                          sunit=4      swidth=131076 blks
>>> naming   =version 2                 bsize=4096   ascii-ci=0 ftype=1
>>> log      =internal log              bsize=4096   blocks=475744, version=2
>>>          =                          sectsz=512   sunit=4 blks, lazy-count=1
>>> realtime =none                      extsz=4096   blocks=0, rtextents=0
>>>
>>> What puzzles me is xfs's:
>>> sunit=4 swidth=131076 blks
>>> and I think - what the hexx?
>>
>>
>> Unfortunately 'swidth' in XFS has a different meaning than lvm2's
>> stripe size parameter.
>>
>> In lvm2 -
>>
>>
>> -i | --stripes - how many disks
>> -I | --stripesize - how much data goes to one disk before moving to the next.
>>
>> So -i 4 & -I 16 gives a 64KiB total stripe width.
>>
>> ----
>>
>> XFS meaning:
>>
>> sunit = <RAID controller's stripe size in BYTES (or KiBytes when used
>> with k)>
>> swidth = <# of data disks (don't count parity disks)>
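>>
>> (a sketch of passing that explicitly - su takes bytes or KiB, sw the
>> number of data disks, so for -i4 -I16:)
>>
>> # mkfs.xfs -f -d su=16k,sw=4 /dev/vg/r0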
>>
>> ----
>>
>> ---- so real-world example ----
>>
>> # lvcreate --type striped -i4 -I16 -L1G -n r0 vg
>>
>> or
>>
>> # lvcreate --type raid0 -i4 -I16 -L1G -n r0 vg
>>
>> # mkfs.xfs /dev/vg/r0 -f
>> meta-data=/dev/vg/r0             isize=512    agcount=8, agsize=32764 blks
>>          =                       sectsz=512   attr=2, projid32bit=1
>>          =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
>> data     =                       bsize=4096   blocks=262112, imaxpct=25
>>          =                       sunit=4      swidth=16 blks
>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
>> log      =internal log           bsize=4096   blocks=552, version=2
>>          =                       sectsz=512   sunit=4 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>>
>>
>> ---- and we have ----
>>
>> sunit=4 ... 4 * 4096 = 16KiB (matching lvm2 -I16 here)
>> swidth=16 blks ... 16 * 4096 = 64KiB
>> so the total width (64KiB) divided by the size of a single strip
>> (sunit, 16KiB) -> 4 disks (matching lvm2 -i4 here)
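>>
>> (one way to sanity-check what the kernel itself reports, a sketch
>> using util-linux blockdev - expect 16384 and 65536 bytes here:)
>>
>> # blockdev --getiomin --getioopt /dev/vg/r0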
>>
>> Yep complex, don't ask... ;)
>>
>>
>>
>>>
>>> In an LVM non-raid stripe scenario I always remembered it as:
>>> swidth = sunit * Y, where Y = number of stripes, right?
>>>
>>> I'm hoping some expert could shed some light and help me (maybe
>>> others too) understand what LVM is doing there? I'd appreciate it.
>>> many thanks, L.
>>
>>
>> Well, in the first place there is a major discrepancy in the naming:
>>
>> You use the VG name intel.raid0-0
>> and then you mkfs the device /dev/mapper/intel.sataA-0 ??
>>
>> While you should be accessing: /dev/intel.raid0/0
>>
>> Are you sure you are not trying to overwrite some unrelated device here?
>>
>> (As the numbers you show look unrelated - or you have a buggy kernel
>> or blkid....)
>>
>
> hi,
> I renamed the VG in the meantime.
> I get the xfs intricacy.. so.. the question still stands:
> why does xfs format not do what I remember it always doing in the past
> (on lvm non-raid but striped), like in your example
>
> = sunit=4 swidth=16 blks
> but I see instead:
>
> = sunit=4 swidth=4294786316 blks
>
> which is a whole lot:
>
> $ xfs_info /__.aLocalStorages/0
> meta-data=/dev/mapper/intel.raid0--0-0 isize=512    agcount=32, agsize=30768000 blks
>          =                             sectsz=512   attr=2, projid32bit=1
>          =                             crc=1        finobt=0 spinodes=0
> data     =                             bsize=4096   blocks=984576000, imaxpct=5
>          =                             sunit=4      swidth=4294786316 blks
> naming   =version 2                    bsize=4096   ascii-ci=0 ftype=1
> log      =internal                     bsize=4096   blocks=480752, version=2
>          =                             sectsz=512   sunit=4 blks, lazy-count=1
> realtime =none                         extsz=4096   blocks=0, rtextents=0
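>
> (a sketch for cross-checking where mkfs got that width - the dm-0 name
> is hypothetical, read it off the /dev/mapper symlink; with -I16 and 4
> stripes a sane optimal_io_size would be 65536 bytes:)
>
> $ readlink /dev/mapper/intel.raid0--0-0       # -> ../dm-0, say
> $ cat /sys/block/dm-0/queue/optimal_io_size   # bogus if not 65536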
>
> $ lvs -a -o +segtype,stripe_size,stripes,devices intel.raid0-0
>   LV           VG            Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Type   Stripe #Str Devices
>   0            intel.raid0-0 rwi-aor---   3.67t                                                   raid0  16.00k    4 0_rimage_0(0),0_rimage_1(0),0_rimage_2(0),0_rimage_3(0)
>   [0_rimage_0] intel.raid0-0 iwi-aor--- 938.96g                                                   linear      0    1 /dev/sdc(0)
>   [0_rimage_1] intel.raid0-0 iwi-aor--- 938.96g                                                   linear      0    1 /dev/sdd(0)
>   [0_rimage_2] intel.raid0-0 iwi-aor--- 938.96g                                                   linear      0    1 /dev/sde(0)
>   [0_rimage_3] intel.raid0-0 iwi-aor--- 938.96g                                                   linear      0    1 /dev/sdf(0)
>
>
>>
>> Regards
>>
>> Zdenek
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/