[linux-lvm] pvscan takes 45-90 minutes booting off ISO with thin pools
Patrick Mitchell
patricklmitchell9 at gmail.com
Thu May 17 00:03:56 UTC 2018
Changing the ISO's lvm.conf to set "activation = 0" in the global
section makes it boot very quickly. I can then manually run a single
"pvscan --cache --activate ay" to activate everything, and it takes
only a few seconds. So I'm thinking this has to be a locking problem
with trying to activate so many logical volumes and thin pools
simultaneously.
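For reference, this is roughly the change I made (a sketch; the
"activation" key lives in the global section, see lvm.conf(5)):

```
# /etc/lvm/lvm.conf on the ISO -- sketch of the workaround
global {
    # Disable automatic LV activation at boot; activate manually later.
    activation = 0
}
```

After boot, one manual "lvm pvscan --cache --activate ay" then brings
everything up in seconds.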
On Mon, May 14, 2018 at 1:02 AM, Patrick Mitchell
<patricklmitchell9 at gmail.com> wrote:
> Sometimes when booting off an Arch installation ISO (even recent
> kernel 4.16.8 & lvm2 2.02.177) LVM's pvscan takes 60-90 minutes. This
> is with large thin pools, which seem to have caused such delays for
> people in the past; the fix then was to add "--skip-mappings" to
> thin_check_options.
>
> This used to always happen when booting off an ISO, until I made a
> custom one with "--skip-mappings". With this, it's intermittent.
> Sometimes nearly instant, sometimes 45-90 minutes.
>
> This delay never happens when booting off an install on a drive. (I'm
> thinking there must be a cache on the installed system that obviously
> doesn't exist on the ISO?)
>
> When there's a massive delay:
>
> root at archiso ~ # date && ps ax | grep scan
> Mon May 14 03:08:14 UTC 2018
> 717 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:65
> 718 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:19
> 719 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:51
> 720 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:115
> 721 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:99
> 722 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:68
> 724 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:52
> 725 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:49
> 727 ? S<s 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:67
> 728 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:66
> 731 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:83
> 733 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:50
> 748 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:2
> 752 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:1
> 753 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:3
> 754 ? S<Ls 0:01 /usr/bin/lvm pvscan --cache --activate ay 8:4
> 755 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:33
> 756 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:36
> 757 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:35
> 759 ? S<Ls 0:00 /usr/bin/lvm pvscan --cache --activate ay 8:34
> 768 ? S<Ls 0:01 /usr/bin/lvm pvscan --cache --activate ay 259:1
>
> And iotop shows 0 bytes being read or written for most of it.
>
> Is Arch using pvscan incorrectly? Is it meant for a process to be run
> for each device? Is concurrently running a pvscan for each device
> path causing lock contention? Should Arch be running one instance of
> pvscan without device major and minor block numbers?
>
> Here is Arch's "lvm2-pvscan at .service"
>
> =====
>
> [Unit]
> Description=LVM2 PV scan on device %i
> Documentation=man:pvscan(8)
> DefaultDependencies=no
> StartLimitInterval=0
> BindsTo=dev-block-%i.device
> Requires=lvm2-lvmetad.socket
> After=lvm2-lvmetad.socket lvm2-lvmetad.service
> Before=shutdown.target
> Conflicts=shutdown.target
>
> [Service]
> Type=oneshot
> RemainAfterExit=yes
> ExecStart=/usr/bin/lvm pvscan --cache --activate ay %i
> ExecStop=/usr/bin/lvm pvscan --cache %i
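For anyone hitting the same thing: the thin_check_options workaround I
baked into the custom ISO looks roughly like this (a sketch; the exact
default option list may differ by lvm2 version, see lvm.conf(5) and
thin_check(8)):

```
# /etc/lvm/lvm.conf -- sketch; thin_check_options is in the global section
global {
    # Pass --skip-mappings so thin_check doesn't walk every mapping of
    # a large thin pool before activation.
    thin_check_options = [ "-q", "--skip-mappings" ]
}
```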
More information about the linux-lvm
mailing list