[linux-lvm] lvm2-testsuite stability

Scott Moser smoser at brickies.net
Mon Jun 19 18:22:20 UTC 2023


Hi, thanks for your response.

> Yep - some tests are failing
>
> > expected-fail  api/dbustest.sh
>
> We do have them even split to individual tests;
> api/dbus_test_cache_lv_create.sh
> api/dbus_test_log_file_option.sh

That is not available upstream, right?
I just saw the single 'dbustest.sh' in
[main/test](https://github.com/lvmteam/lvm2/tree/master/test/api).
Is there another branch I should be looking at?

> I'd likely need to get access to / see the logs of such machines
> (or you would need to provide some downloadable image of your Qemu machine
> installation)

The gist at https://gist.github.com/smoser/3107dafec490c0f4d9bf9faf02327f04
describes how I am doing this.  It is a "standard" package build and
autopkgtest run on Debian/Ubuntu.  The autopkgtest VM does not use LVM for
the system itself, so we don't have to worry about interaction with that.

I could provide a VM image if you are interested.

> > Do others run this test-suite in automation and get reliable results?
> >
>
> We surely do run these tests on a regular basis on VMs - so those are usually
> slightly modified to avoid collisions with tests.  There is also no
> strict rule against breaking some 'tests' - so occasionally some tests can
> be failing for a while if they are seen as 'less important' next to some
> other bugs...

Tracking the set of tests that are allowed to fail in git, and gating
pull requests on the rest passing, would be wonderful.  Without such an
expected-working list, it is hard for me as a downstream user to
separate signal from noise.

Would upstream be open to pull requests that add test-suite runs via
GitHub Actions?  Or is there some other preferred mechanism for such a thing?
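
The CI job itself would not need to do much - roughly something like the
following, run inside a disposable VM (make targets and options here are
taken from TESTING / test/Makefile, so treat the details as approximate):

    # Rough sketch of what a CI job could run in a throwaway VM.
    # Targets and options are assumptions based on TESTING; adjust as needed.
    ./configure
    make -j"$(nproc)"
    make check    # or narrow to a curated list of known-good tests with T=<pattern>

The hard part is less the commands themselves and more agreeing on which
tests are expected to pass, which loops back to the point above.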

The test suite is really well done.  I was surprised by how well it insulates
itself from the system and how easy it was to use.  Running it in a
distro would give the distro developer a *huge* boost in confidence when
integrating a new LVM release.

>
> We would need to think much harder if the test should be running with
> some daemons or autoactivation on the system that could see and could
> interact with our devices generated during the test run (one of the
> reasons machines for tests need some local modification) - we may provide
> some Ansible-like testing script eventually.

Autopkgtest will:
 * start a new VM for each run of the tests
 * install the packages listed as dependencies of the test
 * run the test "entrypoint" (debian/test/testsuite)

I think I have debian/test/testsuite correctly shutting down/masking the
necessary system services before invoking the tests, as suggested in
TESTING; roughly the sketch below.
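
This is only a sketch - the exact systemd unit names (.socket vs .service)
are my approximation here, not a claim about the exact script contents:

    # Stop and mask anything that could react to devices the tests create.
    # Unit names are approximate; adjust .socket/.service as needed.
    for unit in dm-event.socket dm-event.service \
                lvm2-lvmpolld.socket lvm2-lvmpolld.service \
                lvm2-monitor.service lvm2-lvmdbusd.service; do
        systemctl stop "$unit" || true
        systemctl mask "$unit"
    done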

> But anyway - the easiest is to give us access to your test results so we
> could see whether there is something wrong with our test environment, an
> lvm2 bug, or the system setup - it's not always trivial to guess...

If you are willing to help, I can post a VM image somewhere.  I suspect
you're not working with Debian or Ubuntu on a daily basis.  If you had
access to a Debian or Ubuntu system, it would probably be easiest to
just let autopkgtest do the running; autopkgtest provides `--shell` and
`--shell-fail` options that drop you into a root shell in the test VM
after the tests.
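
For example, on an Ubuntu host the whole thing can be driven with something
like this (the image name and .dsc version are placeholders from my runs):

    # Build a 22.04 test image once, then run the lvm2 autopkgtest inside it.
    # --shell-fail drops you into a root shell in the VM when a test fails.
    autopkgtest-buildvm-ubuntu-cloud -r jammy
    autopkgtest --shell-fail lvm2_2.03.16-3.dsc -- \
        qemu autopkgtest-jammy-amd64.img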

My ultimate goal is to give a distro confidence that the lvm2
package it is integrating works correctly.  I'm fine with skipping
tests that produce noisy results; having *some* reliable set of tests
is still a huge improvement.

Thanks,
Scott

On Mon, Jun 19, 2023 at 8:26 AM Zdenek Kabelac <zdenek.kabelac at gmail.com> wrote:
>
> On 15. 06. 23 at 20:02, Scott Moser wrote:
> > Hi,
> > [sorry for duplicate post, re-sending from a subscribed address]
> >
> > I'm looking to enable the lvm2 testsuite as an autopkgtest [1] to run
> > in debian and ubuntu. I have a merge request up at [2].  The general
> > idea is just to a.) package 'lvm2-testsuite' as an installable package
> > b.) run the testsuite as part of the autopkgtest.
> >
> > The version I'm testing on Ubuntu 22.04 is 2.03.16-3 from Debian
> > (rebuilt for 22.04). I'm running udev-vanilla in a 2 CPU / 4 GB VM, and
> > stopping/masking the following services: dm-event lvm2-lvmpolld
> > lvm2-monitor lvm2-lvmdbusd.
> >
> > I'm seeing some failures when running the tests.  Some seem expected
> > due to size limitations, some seem to fail every time, and some fail
> > only transiently.
> >
> > Here is the list of tests that I'm seeing fail and my initial
> > categorization.  I've seen this across, say, half a dozen runs:
> >
>
> Yep - some tests are failing
>
> > expected-fail  api/dbustest.sh
>
> We do have them even split to individual tests;
>
> api/dbus_test_cache_lv_create.sh
> api/dbus_test_copy_signature.sh
> api/dbus_test_external_event.sh
> api/dbus_test_log_file_option.sh
> api/dbus_test_wipefs.sh
> api/dbus_test_z_sigint.sh
>
> these need to be fixed and resolved.
>
> > expected-fail  shell/lvconvert-repair-thin.sh
>
>
>
> > space-req      shell/lvcreate-large-raid.sh
> > space-req      shell/lvcreate-thin-limits.sh
> > expected-fail  shell/lvm-conf-error.sh
> > expected-fail  shell/lvresize-full.sh
> > timeout        shell/pvmove-abort-all.sh
> > space-req      shell/pvmove-basic.sh
> > expected-fail  shell/pvscan-autoactivation-polling.sh
> > expected-fail  shell/snapshot-merge.sh
> > space-req      shell/thin-large.sh
> > racy           shell/writecache-cache-blocksize.sh
>
> These are individual cases - we do test some of those on some machines.
> They may need some 'extra' care.
>
> >
> > expected-fail fails almost every time. timeout seems to work sometimes,
> > space-req I think is just a space-requirement issue (I'll just skip
> > those tests).
> >
>
> I'd likely need to get access to / see the logs of such machines
> (or you would need to provide some downloadable image of your Qemu machine
> installation)
>
>
> > The full output from the test run can be seen at [3] in the
> > testsuite-stdout.txt and testsuite-stderr.txt files.
> >
> > Do others run this test-suite in automation and get reliable results?
> >
>
> We surely do run these tests on a regular basis on VMs - so those are usually
> slightly modified to avoid collisions with tests.
> There is also no strict rule against breaking some 'tests' - so occasionally
> some tests can be failing for a while if they are seen as 'less important'
> next to some other bugs...
>
> We would need to think much harder if the test should be running with some
> daemons or autoactivation on the system that could see and could interact with
> our devices generated during the test run (one of the reasons machines for
> tests need some local modification) - we may provide some Ansible-like testing
> script eventually.
>
> But anyway - the easiest is to give us access to your test results so we could
> see whether there is something wrong with our test environment, an lvm2 bug, or
> the system setup - it's not always trivial to guess...
>
>
> Regards
>
> Zdenek
>
>


