[Avocado-devel] Tips for making a standalone test script Avocado-friendly?

Lucas Meneghel Rodrigues lookkas at gmail.com
Wed Apr 5 14:09:38 UTC 2017


Some quick thoughts on what you could do (points 1 and 2; the remaining
paragraphs are ideas for making avocado better at handling such cases):

1) For the cases using unittest, you could try to import avocado and, if
that fails, fall back to unittest, such as

try:
    from avocado import Test as TestClass
    from avocado import main
except ImportError:
    from unittest import TestCase as TestClass
    from unittest import main

Then make the classes inherit from TestClass and end the script with

if __name__ == '__main__':
    main()
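Put together, a minimal dual-mode test could look like this (HelloTest and
its check are just placeholders, not a real QEMU test):

```python
# Minimal sketch of a test that runs under avocado when it is
# installed and falls back to plain unittest otherwise.
try:
    from avocado import Test as TestClass
    from avocado import main
except ImportError:
    from unittest import TestCase as TestClass
    from unittest import main


class HelloTest(TestClass):
    def test_hello(self):
        # avocado's Test inherits from unittest.TestCase, so the
        # usual assert* helpers work in both modes.
        self.assertEqual("hello".upper(), "HELLO")

# A standalone script would end with:
#     if __name__ == '__main__':
#         main()
```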

2) For the tests that use a main() entry point, you can refactor main()
slightly to separate argument parsing from test execution, and then
implement a small avocado test class that calls the test-execution
routine. That way the script still works standalone, and avocado can run
the same code. You won't get per-test granularity in the runner for
dynamically generated test functions, though; see the last paragraph for
thoughts on that.
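The split described above might look like the following sketch;
`parse_args`, `run_tests`, and the `qemu_binary` parameter are
hypothetical names, not taken from Eduardo's scripts:

```python
# Sketch: separate argument parsing from test execution so an avocado
# test class can call run_tests() directly, without touching sys.argv.
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="standalone QEMU test")
    parser.add_argument("--qemu-binary", default="qemu-system-x86_64")
    return parser.parse_args(argv)


def run_tests(qemu_binary):
    """Run the actual test logic; return the number of failures."""
    failures = 0
    # ... the real checks against qemu_binary would go here ...
    return failures


def main():
    args = parse_args()
    return run_tests(args.qemu_binary)


# Small avocado wrapper, only defined when avocado is installed:
try:
    from avocado import Test

    class QemuTest(Test):
        def test(self):
            binary = self.params.get("qemu_binary",
                                     default="qemu-system-x86_64")
            self.assertEqual(run_tests(binary), 0)
except ImportError:
    pass

# A standalone script would finish with:
#     if __name__ == '__main__':
#         sys.exit(main())
```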

A more complicated, longer-term solution would be to make avocado more
like pytest: on top of running avocado-instrumented test classes, the
test runner would also be able to run arbitrary callables with certain
names, such as `test_something`.

A final thought about dynamically generated test functions: dynamic
function generation makes things harder for a test runner that has to
inspect files to figure out what is runnable and build a list of tests.
Maybe we can come up with a way to make avocado aware of dynamically
generated callables, so the test loader/runner can locate them properly
and run them as tests.

Maybe we could inspect the global scope of imported test modules for
callables with certain names and execute them as avocado tests?
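As a rough illustration of that idea (purely a sketch, not existing
avocado behavior), a loader could scan a module's globals for
`test_`-prefixed callables, including ones generated at runtime:

```python
# Illustrative sketch: collect every callable in a module whose name
# starts with "test_", the way a pytest-style loader might.
import types


def find_test_callables(module):
    """Return the module's test_* callables, sorted by name."""
    tests = []
    for name in sorted(vars(module)):
        obj = getattr(module, name)
        if name.startswith("test_") and callable(obj):
            tests.append(obj)
    return tests


# Example: a fake "test module" with one dynamically generated test.
fake_module = types.ModuleType("fake_tests")
fake_module.test_static = lambda: True
setattr(fake_module, "test_generated_0", lambda: True)  # added at runtime

for test in find_test_callables(fake_module):
    assert test() is True
```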

Let me know if this helps.

Cheers,

Lucas

On Wed, Apr 5, 2017 at 3:01 PM Eduardo Habkost <ehabkost at redhat.com> wrote:

>
> Hi,
>
> I have been writing a few standalone Python scripts[1] to test
> QEMU recently, and I would like to make them more useful for
> people running tests using Avocado.
>
> Most of them work this way:
> 1) Query QEMU to check which
>    architectures/machine-types/CPU-models/devices/options
>    it supports
> 2) Run QEMU multiple times for each
>    architecture/machine-type/CPU-model/device/option
>    combination I want to test
> 3) Report success/failure/skip results (sometimes including
>    warnings) for each combination
>
> I would like to keep the test scripts easy to run without
> installing extra dependencies, so I want them to keep working as
> standalone scripts even if Avocado modules aren't available.
> Adding a few "if avocado_available:" lines to the script would be
> OK, though.
>
> Do you have any suggestions for making the test result output
> from those scripts easily consumable by the Avocado test runner?
>
>
> [1] Some examples:
>
> https://github.com/ehabkost/qemu-hacks/blob/work/device-crash-script/scripts/device-crash-test.py
>
> https://github.com/ehabkost/qemu-hacks/blob/work/x86-query-cpu-expansion-test/tests/query-cpu-model-test.py
>
> https://github.com/ehabkost/qemu-hacks/blob/work/query-machines-bus-info/tests/qmp-machine-info.py
>     (Note that some of the scripts use the unittest module, but I
>     will probably get rid of it, because the list of test cases I
> want to run will be generated at runtime. I've even written
> code to add test methods dynamically to the test class, but I
> will probably remove that hack because it's not worth the
> extra complexity.)
>
> --
> Eduardo
>
>