[linux-lvm] ThinPool performance problem with NVMe

Anton Kulshenko shallriseagain at gmail.com
Fri Jun 23 23:23:09 UTC 2023


Please help me figure out what my problem is. No matter how I configure the
system, I can't get high performance, especially on writes.

OS: Oracle Linux 8.6, 5.4.17-2136.311.6.el8uek.x86_64
Platform: Gigabyte R282-Z94 with 2x 7702 64cores AMD EPYC and 2 TB of RAM
Disks: NVMe Samsung PM1733 7.68 TB

What I do:
vgcreate vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
lvcreate -n thin_pool_1 -L 20T -i 4 -I 4 vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

-i 4 stripes across all four disks, -I 4 is the stripe size (in KiB). I also
tried 8, 16, 32... In my setup I can't see a big difference.

lvcreate -n pool_meta -L 15G vg1 /dev/nvme4n1
lvconvert --type thin-pool --poolmetadata vg1/pool_meta vg1/thin_pool_1
lvchange -Zn vg1/thin_pool_1
lvcreate -V 15000G --thin -n data vg1/thin_pool_1
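To double-check how the pool actually came out, I also inspect it afterwards (this is just a verification step; the field names are standard lvs report columns):

```shell
# Show stripe count, stripe size, thin-pool chunk size and zeroing mode
# for everything in vg1, including the hidden pool/metadata LVs (-a)
lvs -a -o name,size,stripes,stripesize,chunksize,zero vg1
```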

After that I generate load with fio using these parameters:
fio --filename=/dev/mapper/vg1-data --rw=randwrite --bs=4k --name=test \
    --numjobs=32 --iodepth=32 --random_generator=tausworthe64 \
    --numa_cpu_nodes=0 --direct=1
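For comparison, this is the single-drive baseline I mention below, run against one raw namespace before it is added to the VG (this destroys data on the disk):

```shell
# Same workload against one raw NVMe namespace, no LVM in the path;
# on this hardware it reaches roughly 130k IOPS
fio --filename=/dev/nvme0n1 --rw=randwrite --bs=4k --name=baseline \
    --numjobs=32 --iodepth=32 --random_generator=tausworthe64 \
    --numa_cpu_nodes=0 --direct=1
```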

I only get ~40k IOPS, while a single drive under the same load easily gives
~130k. I have tried different block sizes, stripe sizes, etc. with no result.
When I look at iostat I see heavy load on the disk holding the pool metadata:
80 wMB/s, 12500 wrqm/s, 68 %wrqm
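To see what the pool itself is doing during the run, I also check its kernel status (the dm device name below follows the usual vg-lv-tpool naming for my layout):

```shell
# Thin-pool target status line: transaction id, used/total metadata
# blocks, used/total data blocks, and the pool's current mode flags
dmsetup status vg1-thin_pool_1-tpool
```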

I don't understand what I'm missing when configuring the system.
