[linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)

Alexander 'Leo' Bergolth leo at strike.wu.ac.at
Fri Nov 18 11:08:23 UTC 2016


On 11/18/2016 11:12 AM, Zdenek Kabelac wrote:
> On 8.11.2016 at 16:15, Alexander 'Leo' Bergolth wrote:
>> On 11/08/2016 10:26 AM, Zdenek Kabelac wrote:
>>> On 7.11.2016 at 16:58, Alexander 'Leo' Bergolth wrote:
>>>> On 11/07/2016 11:22 AM, Zdenek Kabelac wrote:
>>>> Is there a way to change the regionsize for an existing LV?
>>>
>>> I'm afraid there is not yet support for runtime 'regionsize' change
>>> other than rebuilding the array.
>>
>> Unfortunately even rebuilding (converting to linear and back to raid1)
>> doesn't work.
>>
>> lvconvert seems to ignore the --regionsize option and use defaults:
>>
>> lvconvert -m 0 /dev/vg_sys/lv_test
>> lvconvert --type raid1 -m 1 --regionsize 128M /dev/vg_sys/lv_test
>>
>> [10881847.012504] mdX: bitmap initialized from disk: read 1 pages, set
>> 4096 of 4096 bits
>>
>> ... which translates to a regionsize of 512k for a 2G volume.
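
For reference, the arithmetic behind that figure: the write-intent bitmap
uses one bit per region, so region size = LV size / number of bits =
2 GiB / 4096 = 512 KiB, i.e. the requested 128M was ignored and the
default was applied. A rough way to double-check what lvconvert actually
set (the report field name may vary between lvm2 versions):

  lvs -a -o lv_name,region_size vg_sys
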
> 
>
> After doing some simulations here -
> 
> What is the actual USB device type used here?

I did my tests with two 5k-RPM SATA disks connected to a single USB 3.0
port using a JMS562 USB 3.0 to SATA bridge in JBOD mode. According to
lsusb -t, the uas module is in use, and judging by
/sys/block/sdX/queue/nr_requests, command queuing seems to be active.
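
For reference, roughly the checks behind those statements (replace sdX
with the USB-attached disks; /sys/block/sdX/device/queue_depth is just
one more place one could look and may not exist on every setup):

  lsusb -t                               # shows whether uas or usb-storage is bound
  cat /sys/block/sdX/queue/nr_requests   # block-layer request queue size
  cat /sys/block/sdX/device/queue_depth  # SCSI command queue depth, if present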

I discussed my problems with Heinz Mauelshagen yesterday; he was able
to reproduce the issue using two SATA disks connected to two USB 3.0
ports that share the same USB bus. However, he didn't notice any speed
penalty when the same disks were connected to different USB buses.

So it looks like the problem is USB related...

Cheers,
--leo
-- 
e-mail   ::: Leo.Bergolth (at) wu.ac.at
fax      ::: +43-1-31336-906050
location ::: IT-Services | Vienna University of Economics | Austria



