[linux-lvm] Thin Pool Performance

shankha shankhabanerjee at gmail.com
Wed Apr 20 15:55:59 UTC 2016


I am sorry. I forgot to post the workload.

The fio benchmark configuration:

[zipf write]
direct=1
rw=randrw
ioengine=libaio
group_reporting
rwmixread=0
bs=4k
iodepth=32
numjobs=8
runtime=3600
random_distribution=zipf:1.8
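
For reference: with rwmixread=0 this is a pure 4k random-write job with
a skewed Zipf(1.8) access pattern. Assuming the job file is saved as
zipf-write.fio and the thin LV is /dev/vg/thinlv (both names are
placeholders, not from my actual setup), it can be run as:

  fio --filename=/dev/vg/thinlv zipf-write.fio

Since filename points at a block device, fio takes the job size from
the device itself, so no size= option is needed.
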
Thanks
Shankha Banerjee


On Wed, Apr 20, 2016 at 9:34 AM, shankha <shankhabanerjee at gmail.com> wrote:
> Hi,
> I had just one thin logical volume and was running fio benchmarks. I
> tried having the metadata on a RAID0; there was minimal increase in
> performance. I had thin-pool zeroing switched on. If I switched off
> thin-pool zeroing, initial allocations were faster, but the final
> numbers were almost the same. The size of the thin-pool metadata LV
> was 16 GB.
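>
> For anyone reproducing this, a minimal sketch of the two knobs
> involved (the VG, device, and size names below are placeholders):
>
>   # create a pool with zeroing disabled (-Zn) and a 16G metadata LV
>   lvcreate --type thin-pool -L 900G --poolmetadatasize 16G -Zn -n pool vg
>
>   # or toggle zeroing on an existing pool
>   lvchange -Zn vg/pool
>
>   # to put the metadata LV on a separate device (e.g. the RAID0),
>   # create data and metadata LVs on specific PVs, then convert
>   lvcreate -n pool -L 900G vg /dev/md0
>   lvcreate -n poolmeta -L 16G vg /dev/md1
>   lvconvert --type thin-pool --poolmetadata vg/poolmeta vg/pool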
> Thanks
> Shankha Banerjee
>
>
> On Tue, Apr 19, 2016 at 4:11 AM, Zdenek Kabelac <zkabelac at redhat.com> wrote:
>> On 19.4.2016 at 03:05, shankha wrote:
>>>
>>> Hi,
>>> Please allow me to describe our setup.
>>>
>>> 1) 8 SSDs with RAID5 on top. Let us call the RAID device dev_raid5.
>>> 2) We create a volume group on dev_raid5.
>>> 3) We create a thin pool occupying 100% of the volume group.
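>>>
>>> Roughly the following, with /dev/md0 and vg as placeholder names:
>>>
>>>   pvcreate /dev/md0
>>>   vgcreate vg /dev/md0
>>>   lvcreate --type thin-pool -l 100%FREE -n pool vg
>>>   lvcreate -V 1T -T vg/pool -n thinlv
>>>
>>> (With -l 100%FREE, lvm should trim the data LV so that the pool's
>>> metadata LV still fits in the same VG.)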
>>>
>>> We performed some experiments.
>>>
>>> Our random write performance dropped by half, and there was a
>>> significant reduction for the other operations (sequential read,
>>> sequential write, random read) as well, compared to native RAID5.
>>>
>>> If you wish I can share the data with you.
>>>
>>> We then changed our configuration from one pool to 4 pools (roughly
>>> as sketched below) and were able to get back to 80% of the
>>> performance (compared to native RAID5).
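>>>
>>> Something like the following, with placeholder names; each pool gets
>>> slightly less than a quarter of the VG so its metadata LV fits too:
>>>
>>>   for i in 1 2 3 4; do
>>>     lvcreate --type thin-pool -l 24%VG -n pool$i vg
>>>   done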
>>>
>>> To us it seems that the lvm metadata operations are the bottleneck.
>>>
>>> Do you have any suggestions on how to get back the performance with lvm?
>>>
>>> LVM version:     2.02.130(2)-RHEL7 (2015-12-01)
>>> Library version: 1.02.107-RHEL7 (2015-12-01)
>>>
>>
>>
>> Hi
>>
>>
>> Thanks for playing with thin-pool; however, your report is largely
>> incomplete.
>>
>> We do not see your actual VG setup.
>>
>> Please attach your 'vgs'/'lvs' output and tell us:
>>
>> - thin-pool zeroing (if you don't need it, keep it disabled),
>> - chunk size (use bigger chunks if you do not need snapshots),
>> - the number of simultaneously active thin volumes in a single
>>   thin-pool (running hundreds of loaded thin LVs is going to lose
>>   the battle on locking),
>> - the size of the thin-pool metadata LV, and whether it is located
>>   on a separate device (you should not use RAID5 for metadata),
>> - what kind of workload you are running.
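>>
>> For instance, most of this can be read from the lvs report fields,
>> with "vg" standing in for your actual volume group:
>>
>>   vgs vg
>>   lvs -a -o lv_name,lv_size,chunk_size,zero,lv_metadata_size,metadata_percent vg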
>>
>> Regards
>>
>> Zdenek
>>