[lvm-devel] thin vol write performance variance

Zdenek Kabelac zkabelac at redhat.com
Thu Sep 16 19:59:11 UTC 2021


On 15. 09. 21 at 9:02, Lakshmi Narasimhan Sundararajan wrote:
> Hi Team,
> A very good day to you.
> 
> I have a lvm2 thin pool and thin volumes in my environment.
> I see a huge variance in write performance over those thin volumes.
> As one can observe from the logs below, writing the same amount of
> data (~1.5G) to the thin volume (/dev/pwx0/608561273872404373) takes
> anywhere between 2s and 40s.
> The chunk size on the thin pool is 128 sectors (64KiB).
> I understand that chunks are mapped to the thin volume lazily, as IO
> requests come in.
> Is there a way to test/quantify that the overhead is caused by this
> lazy mapping?
> Are there any other configs/areas that I can tune to control this behavior?
> Are there any tunables/ioctls to map regions ahead of time (a la
> readahead)?
> Are any other options available to confirm that this behavior is due
> to the lazy mapping, and ways to improve it?
> 
> My intention is to improve this behavior and keep the variance within
> a tighter bound.
> Looking forward to your inputs in helping me understand this better.
> 

Hi

I think we first need to 'decipher' the origins of your problem.

So what backend 'storage' is in use?
Do you use a fast device like an SSD/NVMe to store the thin-pool metadata?
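
You can check which devices actually back the pool's metadata with
something like this (VG name 'pwx0' taken from your mail; the hidden
[*_tmeta] sub-LV line shows the metadata placement):

  # list hidden sub-LVs and the physical devices backing them
  lvs -a -o name,attr,devices pwx0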

Do you measure your time *after* syncing all 'unwritten/buffered' data to disk?
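
For example, with dd you can include the flush in the timing (device
path taken from your mail; note this write is destructive):

  sync    # get previously buffered data out of the picture first
  time dd if=/dev/zero of=/dev/pwx0/608561273872404373 \
          bs=1M count=1536 conv=fdatasync

conv=fdatasync makes dd flush the written data before it exits, so the
reported time covers the data actually reaching the pool, not just the
page cache.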

What hardware is actually in use (RAM, CPU)?

Which kernel and lvm2 versions are being used?
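
E.g.:

  uname -r
  lvm version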

Do you use/need zeroing of provisioned blocks (which may impact performance
and can be disabled with 'lvcreate -Zn')?
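
For an existing pool you can check and change this too (the pool name
'pool' here is illustrative):

  lvs -o name,zero pwx0        # 'zero' column shows whether zeroing is set
  lvchange -Zn pwx0/pool       # stop zeroing newly provisioned chunks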

Do you measure writes while thin chunks are being provisioned, or on an
already provisioned device?
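
A simple way to separate the two cases is to write the same region
twice - the first pass pays the provisioning cost, the second pass hits
already mapped chunks (again destructive, path from your mail):

  time dd if=/dev/zero of=/dev/pwx0/608561273872404373 \
          bs=1M count=1536 oflag=direct conv=fsync   # pass 1: provisions
  time dd if=/dev/zero of=/dev/pwx0/608561273872404373 \
          bs=1M count=1536 oflag=direct conv=fsync   # pass 2: already mapped

If the second pass is consistently fast, the variance most likely comes
from chunk provisioning and metadata commits rather than from the data
device itself. You can also watch the pool's usage grow between passes
with 'lvs -o+data_percent pwx0'.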


Regards

Zdenek



