[linux-lvm] raid & its stripes
lejeczek
peljasz at yahoo.co.uk
Sat Sep 23 15:35:08 UTC 2017
On 23/09/17 16:31, lejeczek wrote:
>
>
> On 18/09/17 17:10, Brassow Jonathan wrote:
>>> On Sep 15, 2017, at 6:59 AM, lejeczek
>>> <peljasz at yahoo.co.uk> wrote:
>>>
>>>
>>> On 15/09/17 03:20, Brassow Jonathan wrote:
>>>> There is definitely a difference here. You have 2
>>>> stripes with 5 devices in each stripe. If you were
>>>> writing sequentially, you’d be bouncing between the
>>>> first 2 devices until they are full, then the next 2,
>>>> and so on.
>>>>
>>>> When using the -i argument, you are creating 10
>>>> stripes. Writing sequentially causes the writes to go
>>>> from one device to the next until all are written and
>>>> then starts back at the first. This is a very
>>>> different pattern.
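>>>>
>>>> To illustrate (a sketch only; the VG name, device set and
>>>> sizes are assumed, not taken from this thread):
>>>>
>>>> ~]# lvcreate --type raid0 -i 2 -n r0 -l 100%vg vg0
>>>> ~]# lvs -a -o +stripes,devices vg0   # #Str 2, each stripe spanning half the PVs
>>>> ~]# lvremove vg0/r0
>>>> ~]# lvcreate --type raid0 -i 10 -n r0 -l 100%vg vg0
>>>> ~]# lvs -a -o +stripes,devices vg0   # #Str 10, one stripe per PV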
>>>>
>>>> I think the result of any benchmark on these two very
>>>> different layouts would be significantly different.
>>>>
>>>> brassow
>>>>
>>>> BTW, I swear that at one point, if you did not provide
>>>> the ‘-i’, it would use all of the devices as stripes, such
>>>> that your two examples would result in the same thing. I
>>>> could be wrong though.
>>>>
>>> that's what I thought I remembered too.
>>> I guess the big question, from a user/admin perspective, is:
>>> when no -i is given, does LVM arrive at those two stripes
>>> through some elaborate determination, so that the stripe
>>> count might vary with the raid type, the number of physical
>>> devices and perhaps other factors, or is 2 stripes simply a
>>> hard-coded default?
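>>> (One thing worth checking: if memory serves, newer lvm2 has
>>> an allocation/raid_stripe_all_devices setting to restore the
>>> old stripe-across-all-PVs behaviour when -i is not given; I
>>> may be misremembering the name, but something like
>>>
>>> ~]# lvmconfig --type default allocation/raid_stripe_all_devices
>>>
>>> should show whether it exists and what the built-in default
>>> is.)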
>> If it is a change in behavior, I’m sure it came as the
>> result of some changes in the RAID handling code from
>> recent updates and is not due to some uber-intelligent
>> agent that is trying to figure out the best fit.
>>
>> brassow
>>
> but it confuses; the current state of affairs is confusing.
> To add to it:
>
> ~]# lvcreate --type raid5 -n raid5-0 -l 96%vg caddy-six /dev/sd{a..f}
>   Using default stripesize 64.00 KiB.
>   Logical volume "raid5-0" created.
>
> ~]# lvs -a -o +stripes caddy-six
>   LV                 VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert #Str
>   raid5-0            caddy-six rwi-a-r---   1.75t                                    0.28                3
>   [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                        1
>   [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                        1
>   [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                        1
>   [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                        1
>   [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                        1
>   [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                        1
>   [raid5-0_rmeta_0]  caddy-six ewi-aor---   4.00m                                                        1
>   [raid5-0_rmeta_1]  caddy-six ewi-aor---   4.00m                                                        1
>   [raid5-0_rmeta_2]  caddy-six ewi-aor---   4.00m                                                        1
>
> The VG and LV were created with an explicit list of 6 PVs.
> How can we rely on what lvcreate does when left to decide
> and/or use defaults?
> Is the above raid5 example what LVM is supposed to do? Is it
> even a correct raid5 layout (six phy disks)?
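>
> (If the intent is one stripe per disk, it looks like the
> stripe count must be given explicitly. A sketch, untested;
> note that -i counts data stripes only, parity comes on top:
>
> ~]# lvcreate --type raid5 -i 5 -n raid5-1 -l 96%vg caddy-six /dev/sd{a..f}
>
> That should report #Str 6 on the top-level LV, one image per
> PV.)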
>
> regards.
>
>
~]# lvs -a -o +stripes,devices caddy-six
  LV                 VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert #Str Devices
  raid5-0            caddy-six rwi-a-r---   1.75t                                    2.36                3 raid5-0_rimage_0(0),raid5-0_rimage_1(0),raid5-0_rimage_2(0)
  [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sda(1)
  [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sdd(0)
  [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sdb(1)
  [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sde(0)
  [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sdc(1)
  [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sdf(0)
  [raid5-0_rmeta_0]  caddy-six ewi-aor---   4.00m                                                        1 /dev/sda(0)
  [raid5-0_rmeta_1]  caddy-six ewi-aor---   4.00m                                                        1 /dev/sdb(0)
  [raid5-0_rmeta_2]  caddy-six ewi-aor---   4.00m                                                        1 /dev/sdc(0)
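
So the default here is 2 data stripes plus parity, and each of
the three images is a concatenation of two PVs (sda+sdd,
sdb+sde, sdc+sdf): a legal raid5, just not striped across all
six disks. If the lvm2 at hand is recent enough to support raid
reshaping, something like (a sketch, untested):

~]# lvconvert --stripes 5 caddy-six/raid5-0

might change it to a 5+1 layout in place; otherwise recreate
the LV with an explicit -i 5.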