[linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)

Zdenek Kabelac zkabelac at redhat.com
Tue Nov 8 09:26:15 UTC 2016


On 7.11.2016 at 16:58, Alexander 'Leo' Bergolth wrote:
> On 11/07/2016 11:22 AM, Zdenek Kabelac wrote:
>> On 7.11.2016 at 10:30, Alexander 'Leo' Bergolth wrote:
>>> I am experiencing a dramatic degradation of the sequential write speed
>>> on a raid1 LV that resides on two USB-3 connected harddisks (UAS
>>> enabled), compared to parallel access to both drives without raid or
>>> compared to MD raid:
>>>
>>> - parallel sequential writes LVs on both disks: 140 MB/s per disk
>>> - sequential write to MD raid1 without bitmap: 140 MB/s
>>> - sequential write to MD raid1 with bitmap: 48 MB/s
>>> - sequential write to LVM raid1: 17 MB/s !!
>>>
>>> According to the kernel messages, my 30 GB raid1 test LV gets equipped
>>> with a 61440-bit write-intent bitmap (1 bit per 512 KiB of data?!), whereas
>>> a default MD raid1 bitmap is only 480 bits in size (1 bit per 64 MB).
>>> Maybe the dramatic slowdown is caused by this much too fine-grained
>>> bitmap and its updates, which are random IO?
>>>
>>> Is there a way to configure the bitmap size?
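
A quick way to sanity-check those bitmap sizes is plain shell arithmetic
(assuming a 30 GiB LV; the region sizes follow from the bit counts quoted
above):

  echo $(( 30 * 1024 * 1024 / 512 ))   # 512 KiB regions -> 61440 bits
  echo $(( 30 * 1024 / 64 ))           # 64 MiB regions  ->   480 bits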
>>
>> Can you please provide some results with '--regionsize' changes?
>> While '64MB' is quite 'huge' for resync, I guess the currently picked
>> default region size is likely very small in some cases.
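
A test along those lines could look roughly like this (a sketch only - the
VG name "vg", the LV name "r1test" and the device names are placeholders;
the 1000 MiB write size matches the numbers below):

  lvcreate --type raid1 -m 1 -L 30G -R 4M -n r1test vg /dev/sdX /dev/sdY
  dd if=/dev/zero of=/dev/vg/r1test bs=1M count=1000 oflag=direct
  lvremove -f vg/r1test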
>
> Ah - thanks. Didn't know that --regionsize is also valid for --type raid1.
>
> With --regionsize 64 MB, the bitmap has the same size as the default
> bitmap created by mdadm, and the write performance is also similar:
>
> *** regionsize vs. sequential write speed (1048576000 bytes = 1000 MiB written each run):
>
>   --regionsize   1M:   63.957  s    16.4 MB/s
>   --regionsize   2M:   39.1517 s    26.8 MB/s
>   --regionsize   4M:   32.8275 s    31.9 MB/s
>   --regionsize  16M:   30.2903 s    34.6 MB/s
>   --regionsize  32M:   30.1452 s    34.8 MB/s
>   --regionsize  64M:   21.6208 s    48.5 MB/s
>   --regionsize 128M:   14.2028 s    73.8 MB/s
>   --regionsize 256M:   11.6581 s    89.9 MB/s
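
The region size each test LV actually got can be double-checked with lvs
(a sketch; the report field should be available as "regionsize" in current
lvm2, and "vg" is a placeholder name):

  lvs -a -o lv_name,segtype,regionsize vg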
>
>
> Is there a way to change the regionsize for an existing LV?


I'm afraid there is no support yet for changing the 'regionsize' at runtime,
other than rebuilding the array.
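
For now that means recreating the LV with the desired value, roughly like
this (a sketch only - names and sizes are placeholders, and the data has to
be backed up and restored around it):

  lvremove vg/r1test
  lvcreate --type raid1 -m 1 -L 30G -R 64M -n r1test vg
  # ... restore the data ...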

But your numbers are really something to think about.

Lvm2 surely should pick a more sensible default value here.
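
As a workaround the default can also be raised in lvm.conf - e.g. (assuming
the activation/raid_region_size option as found in recent lvm2 configs; the
value is in KiB):

  activation {
      raid_region_size = 65536    # 64 MiB regions for newly created raid LVs
  }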

But md raid still seems to pay too big a price even with 64M - there
is likely some room for improvement here, I'd say...

Regards

Zdenek




