[dm-devel] max_sectors_kb limitations with VDO and dm-thin
Ryan Norwood
ryan.p.norwood at gmail.com
Wed Apr 24 14:46:00 UTC 2019
On Wed, Apr 24, 2019 at 9:08 AM Ryan Norwood <ryan.p.norwood at gmail.com>
wrote:
> Thank you for your help.
>
> You are correct, it appears that the problem occurs when there is a RAID 5
> or RAID 50 volume beneath VDO.
>
> NAME             KNAME  RA     SIZE    ALIGNMENT  MIN-IO   OPT-IO   PHY-SEC  LOG-SEC  RQ-SIZE  SCHED     WSAME
> sdh              sdh    128    977.5G  0          512      0        512      512      128      deadline  0B
> └─sed6           dm-6   128    977.5G  0          512      0        512      512      128                0B
>   └─md127        md127  12288  5.7T    0          1048576  6291456  512      512      128                0B
>     └─vdo_data   dm-17  128    5.7T    0          1048576  6291456  512      512      128                0B
>       └─vdo      dm-18  128    57.3T   0          4096     4096     4096     4096     128                0B
>
> */sys/block/md126/queue/max_hw_sectors_kb:2147483647*
> /sys/block/md126/queue/max_integrity_segments:0
> */sys/block/md126/queue/max_sectors_kb:512*
> /sys/block/md126/queue/max_segments:64
> /sys/block/md126/queue/max_segment_size:4096
>
> */sys/block/dm-17/queue/max_hw_sectors_kb:512*
> /sys/block/dm-17/queue/max_integrity_segments:0
> */sys/block/dm-17/queue/max_sectors_kb:512*
> /sys/block/dm-17/queue/max_segments:64
> /sys/block/dm-17/queue/max_segment_size:4096
>
> */sys/block/dm-18/queue/max_hw_sectors_kb:4*
> /sys/block/dm-18/queue/max_integrity_segments:0
> */sys/block/dm-18/queue/max_sectors_kb:4*
> /sys/block/dm-18/queue/max_segments:64
> /sys/block/dm-18/queue/max_segment_size:4096
>
> NAME        KNAME  RA   SIZE    ALIGNMENT  MIN-IO  OPT-IO  PHY-SEC  LOG-SEC  RQ-SIZE  SCHED     WSAME
> sdq         sdq    128  977.5G  0          512     0       512      512      128      deadline  0B
> └─sed15     dm-15  128  977.5G  0          512     0       512      512      128                0B
>   └─vdo     dm-16  128  57.3T   0          4096    4096    4096     4096     128                0B
>
> */sys/block/sdq/queue/max_hw_sectors_kb:256*
> /sys/block/sdq/queue/max_integrity_segments:0
> */sys/block/sdq/queue/max_sectors_kb:256*
> /sys/block/sdq/queue/max_segments:64
> /sys/block/sdq/queue/max_segment_size:65536
>
> */sys/block/dm-15/queue/max_hw_sectors_kb:256*
> /sys/block/dm-15/queue/max_integrity_segments:0
> */sys/block/dm-15/queue/max_sectors_kb:256*
> /sys/block/dm-15/queue/max_segments:64
> /sys/block/dm-15/queue/max_segment_size:4096
>
> */sys/block/dm-16/queue/max_hw_sectors_kb:256*
> /sys/block/dm-16/queue/max_integrity_segments:0
> */sys/block/dm-16/queue/max_sectors_kb:256*
> /sys/block/dm-16/queue/max_segments:64
> /sys/block/dm-16/queue/max_segment_size:4096
>
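
The two stacks above differ only in the RAID layer, and the numbers make the bottleneck visible. A minimal sketch of the comparison (the dictionaries simply transcribe the max_sectors_kb values from the sysfs listings above; the min() rule models the stacking behavior under discussion and is not kernel code):

```python
# max_sectors_kb per layer, in KiB, transcribed from the sysfs output above.
raid5_stack = {"md126": 512, "dm-17": 512, "dm-18": 4}   # RAID 5 beneath VDO
plain_stack = {"sdq": 256, "dm-15": 256, "dm-16": 256}   # plain disk beneath VDO

def effective_limit_kb(stack):
    """The largest request the top of the stack can issue is bounded by
    the smallest max_sectors_kb of any layer in the stack."""
    return min(stack.values())

print(effective_limit_kb(raid5_stack))  # 4   -> every request split to 4 KiB
print(effective_limit_kb(plain_stack))  # 256 -> large requests pass through
```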
> On Tue, Apr 23, 2019 at 9:11 PM Sweet Tea Dorminy <sweettea at redhat.com>
> wrote:
>
>> One piece of this that I'm not following:
>>
>> > Now fast forward to VDO. Normally the IO size is determined by the
>> > max_sectors_kb setting in /sys/block/DEVICE/queue. This value is
>> > inherited by stacked DM devices and can be raised by the user up to the
>> > hardware limit max_hw_sectors_kb, which also appears to be inherited by
>> > stacked DM devices. VDO sets this value to 4k, which in turn forces all
>> > layers stacked above it to a 4k maximum as well. If you take my previous
>> > example but place VDO beneath the dm-thin volume, all IO, sequential or
>> > otherwise, will be split down to 4k, which completely eliminates the
>> > performance optimizations that dm-thin provides.
>>
>> I am unable to find a place that VDO is setting max_sectors, and
>> indeed I cannot reproduce this -- I stack VDO atop various disks of
>> max_hw_sectors_kb of 256, 512, or 1280, and VDO reports max_sectors_kb
>> of [underlying max_hw_sectors_kb]. I'm suspicious that it's some other
>> setting that is going wonky... can you recheck whether max_sectors_kb
>> is changing between (device under VDO) and (VDO device)?
>>
>
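For readers following along: when device-mapper stacks one device on another, the kernel merges their queue limits (blk_stack_limits() in block/blk-settings.c), roughly taking the minimum of per-request ceilings and the maximum of block-size granularities. A rough Python model of that merge, illustrative only; the dm-thin numbers below are hypothetical, not taken from the listings above:

```python
def stack_limits(top, bottom):
    """Rough model of how stacked queue limits combine: per-request
    ceilings take the minimum, block-size granularities the maximum.
    An illustrative sketch, not the kernel implementation."""
    return {
        "max_sectors_kb": min(top["max_sectors_kb"], bottom["max_sectors_kb"]),
        "max_hw_sectors_kb": min(top["max_hw_sectors_kb"],
                                 bottom["max_hw_sectors_kb"]),
        "logical_block_size": max(top["logical_block_size"],
                                  bottom["logical_block_size"]),
    }

# Hypothetical example: a dm-thin device stacked over a VDO device whose
# per-request ceiling is 4 KiB inherits that 4 KiB ceiling.
vdo = {"max_sectors_kb": 4, "max_hw_sectors_kb": 4, "logical_block_size": 4096}
thin = {"max_sectors_kb": 1024, "max_hw_sectors_kb": 32767,
        "logical_block_size": 512}

merged = stack_limits(thin, vdo)
print(merged["max_sectors_kb"])      # 4    -> every I/O split to 4 KiB
print(merged["logical_block_size"])  # 4096 -> VDO's 4 KiB block size wins
```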
More information about the dm-devel mailing list