[linux-lvm] [ceph-users] poor read performance on rbd+LVM, LVM overload
josh.durgin at inktank.com
Sun Oct 20 18:21:24 UTC 2013
On 10/20/2013 08:18 AM, Ugis wrote:
>>> output follows:
>>> #pvs -o pe_start /dev/rbd1p1
>>> 1st PE
>>> # cat /sys/block/rbd1/queue/minimum_io_size
>>> # cat /sys/block/rbd1/queue/optimal_io_size
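The checks quoted above can be sketched as a small script. The device name rbd1 comes from the thread; the 4 MiB fallback value is an assumption for illustration (rbd objects default to 4 MiB), used only when the sysfs files are not present:

```shell
# Read the I/O size hints the rbd driver exposes for the device.
# Fallback of 4194304 bytes (4 MiB) is an assumed example value.
min_io=$(cat /sys/block/rbd1/queue/minimum_io_size 2>/dev/null || echo 4194304)
opt_io=$(cat /sys/block/rbd1/queue/optimal_io_size 2>/dev/null || echo 4194304)
echo "minimum_io_size: $((min_io / 1048576)) MiB"
echo "optimal_io_size: $((opt_io / 1048576)) MiB"
```

If these report the rbd object size, the hints are being set correctly on the rbd device itself; the question is what the stacked LVM device reports.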
>> Well, the parameters are being set at least. Mike, is it possible that
>> having minimum_io_size set to 4m is causing some read amplification
>> in LVM, translating a small read into a complete fetch of the PE (or
>> something along those lines)?
>> Ugis, if your cluster is on the small side, it might be interesting to see
>> what requests the client is generating in the LVM and non-LVM case by
>> setting 'debug ms = 1' on the osds (e.g., ceph tell osd.* injectargs
>> '--debug-ms 1') and then looking at the osd_op messages that appear in
>> /var/log/ceph/ceph-osd*.log. It may be obvious that the IO pattern is
>> different in the two cases.
> Sage, here follows debug output. I am no pro in reading this, but it
> seems the read block sizes differ (or what is that number following the ~ sign)?
Yes, that's the I/O length. LVM is sending requests for 4k at a time,
while plain kernel rbd is sending 128k.
<request logs showing this>
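As a sketch of how to read those log lines: with 'debug ms = 1', each osd_op entry carries an extent written as "offset~length", and the number after the '~' is the request length in bytes. The sample line below is illustrative, not taken from the thread; real lines come from /var/log/ceph/ceph-osd*.log:

```shell
# Hypothetical osd_op log line; the "[read offset~length]" extent is
# what matters. 131072 bytes = 128 KiB, the plain-rbd request size.
line='osd_op(client.4105.0:42 rb.0.1032.000000000000 [read 0~131072] ...)'
len=$(printf '%s\n' "$line" | grep -oE '[0-9]+~[0-9]+' | cut -d'~' -f2)
echo "$((len / 1024)) KiB per read"
```

Running the same extraction over the LVM-backed workload's log lines would show 4 rather than 128.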
> How to proceed with tuning read performance on LVM? Is there some
> change needed in the code of ceph/LVM, or does my config need to be tuned?
> If what is shown in the logs means a 4k read block in the LVM case, then it
> seems I need to tell LVM (or does xfs on top of LVM dictate the read
> block size?) that the io block should rather be 4m?
It's a client-side issue of sending much smaller requests than it needs
to. Check the queue minimum and optimal sizes for the LVM device - it
sounds like they might be getting set to 4k for some reason.
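A minimal sketch of that check, assuming a logical volume at /dev/myvg/mylv (a hypothetical name; substitute your own VG/LV). It resolves the underlying dm-N node and prints its I/O hints; the /dev/dm-0 fallback is only a stand-in for when the LV path does not resolve:

```shell
lv=/dev/myvg/mylv                                      # hypothetical LV path
dev=$(readlink -f "$lv" 2>/dev/null || echo /dev/dm-0) # dm-0 as a stand-in
name=${dev##*/}
for f in minimum_io_size optimal_io_size; do
  printf '%s: %s\n' "$f" "$(cat /sys/block/$name/queue/$f 2>/dev/null || echo n/a)"
done
```

If minimum_io_size on the dm device reads 4096 rather than the rbd object size, that matches the 4k requests seen in the logs. Raising the device read-ahead (blockdev --setra) is another knob worth checking for sequential reads.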