[Avocado-devel] RFC: Test dependencies

Olav Philipp Henschel olavph at linux.vnet.ibm.com
Fri Dec 4 16:32:10 UTC 2015


Hello Lukáš,
I would like to clarify your idea of the control files by giving some 
samples of how I imagine them.

On 04-12-2015 11:59, Lukáš Doktor wrote:
> Something like control files
> ============================
>
> For more specific (QA) needs, we might quite easily allow specifying 
> custom Python (any language?) files, which would trigger tests via an 
> API. The possibilities would be limitless: you could run several tests 
> in parallel, wait for them to finish, interact with the jobs... whatever 
> you want. As all test stages are defined as callbacks, output plugins 
> should handle this properly, with two little catches:
>
> 1. console output - it would correctly mark test starts, but the ends 
> would overlap. We already plan to rework this, since we want to 
> support running tests in parallel (one proposed solution is to display 
> one of the running tests and cycle through them: display the finished 
> one with its status and then pick the next one; my demonstration 
> implementation should be hanging around).
>
> 2. multiplexer - people would have to write those files with the 
> multiplexer in mind. They might want to spawn multiple variants of the 
> tests, or first run all tests in the first variant and then the next 
> one... I think the default should be to run the whole "control" file 
> in the first variant, then the second, and so on, but we should allow 
> people to iterate through variants while spawning the tests.

If this custom Python file receives both the multiplexer variants and 
the tests to run as parameters, this could be the default one:

for variant in variants:
    for test in tests:
        avocado.runTest(variant, test)

This would run all tests in the first variant, then the second, and so on.
If I instead wanted to run all variants of the first test, then all 
variants of the second test, and so on, I could just invert the loops:

for test in tests:
    for variant in variants:
        avocado.runTest(variant, test)
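
Regarding the parallel runs mentioned in the quote, the same kind of 
file could also spawn tests concurrently. Here is a rough sketch using 
the stdlib concurrent.futures, assuming the hypothetical 
avocado.runTest above is safe to call from multiple threads (which is 
of course an open question):

import concurrent.futures

for variant in variants:
    # Run every test of this variant concurrently, collect the
    # statuses, then move on to the next variant.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(avocado.runTest, variant, test)
                   for test in tests]
        results = [future.result() for future in futures]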

If I wanted to specify pre-conditions for the tests, I could create a 
specific file like this:

for variant in variants:
    if avocado.runTest(variant, Test("unattended_install")) != PASS:
        continue  # skip the rest of this variant
    avocado.runTest(variant, Test("test1"))
    avocado.runTest(variant, Test("test2"))
    if avocado.runTest(variant, Test("update_image")) != PASS:
        continue
    if avocado.runTest(variant, Test("unattended_install")) != PASS:
        continue
    avocado.runTest(variant, Test("test3"))
    avocado.runTest(variant, Test("test4"))

The downside is that the skipped tests would not be marked as skipped 
in the results. This could be solved by passing a skip condition to 
runTest:

for variant in variants:
    skip_condition = (avocado.runTest(variant, Test("unattended_install"))
                      != PASS)
    avocado.runTest(variant, Test("test1"), skip_condition)
    avocado.runTest(variant, Test("test2"), skip_condition)
    skip_condition = (avocado.runTest(variant, Test("update_image"),
                                      skip_condition) != PASS)
    skip_condition = (avocado.runTest(variant, Test("unattended_install"),
                                      skip_condition) != PASS)
    avocado.runTest(variant, Test("test3"), skip_condition)
    avocado.runTest(variant, Test("test4"), skip_condition)
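
To make the intended semantics explicit, runTest with a skip condition 
could behave roughly like the sketch below (job, record_result and run 
are placeholders I made up, not existing Avocado API):

def runTest(variant, test, skip_condition=False):
    # If the pre-condition failed, record the test as SKIP in the
    # results instead of silently not running it, and report SKIP
    # back so that later skip conditions stay set.
    if skip_condition:
        job.record_result(test, variant, SKIP)
        return SKIP
    return job.run(test, variant)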


Does that look like what you were thinking? If not, could you provide a 
sample file?


Regards,
Olav P. Henschel



