[Avocado-devel] RFC: Test dependencies

Lukáš Doktor ldoktor at redhat.com
Fri Dec 4 13:59:51 UTC 2015


RFC: Test dependencies

Hello guys,

While setting up testing with Jenkins and Avocado, I often needed to 
define which jobs should be executed under which conditions. One 
solution was to split the work into several dependent Jenkins jobs. 
This works fine, but the results end up scattered across multiple jobs, 
which sometimes makes them harder to review, so I think we can do 
better.

I thought about it and two ideas came to mind, both fitting the Avocado 
structure, and in my opinion both have the potential to be quite 
powerful while remaining easy to learn and understand even without 
prior knowledge.


Simple ad-hoc dependencies
==========================

Example usage: running the following sequence:

     1. unattended_install
     2. test1
     3. test2
     4. update_image
     5. unattended_install
     6. test3
     7. test4

Obviously we don't want to run the various tests unless 
unattended_install passed, and similarly for the rest of the tests. One 
way would be to use hardcoded dependencies (as in virt-test, where some 
tests depend on unattended_install/setup/...). I'd like to propose 
another, more flexible solution: implementing special tests which would 
act as conditions. The example usage would be:

     avocado run unattended_install @skipOnFailure @begin test1 test2 \
         @end update_image @skipOnFailure unattended_install \
         @skipOnFailure @begin test3 test4 @end

The naming and the prefix (or absence of one) are only examples. I 
think this would be very flexible, and since all of these would be 
implemented as tests, it would also be easily extensible (one might 
inherit from @skipOnFailure and modify the condition, as in the sketch 
below).
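
To make the idea concrete, here is a minimal sketch of such a condition 
test. Only the `avocado.Test` base class is real; the 
`previous_results` attribute (assumed to be injected by the runner) and 
the mapping from a failed condition to "skip the guarded block" are 
hypothetical:

    from avocado import Test


    class SkipOnFailure(Test):
        """Condition test guarding the following @begin/@end block."""

        def test(self):
            # `previous_results` is an assumed, runner-injected list of
            # result records of the tests that already ran in this job
            last = self.previous_results[-1]
            if last.status != 'PASS':
                # The runner would translate this failure into skipping
                # the guarded @begin/@end block
                self.fail('%s did not pass, skipping guarded tests'
                          % last.name)


    class SkipOnFailureOrWarn(SkipOnFailure):
        """Inherited variant with a stricter condition."""

        def test(self):
            last = self.previous_results[-1]
            if last.status in ('FAIL', 'ERROR', 'WARN'):
                self.fail('%s ended with %s, skipping guarded tests'
                          % (last.name, last.status))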

Another possible improvement is to support parameters, e.g. 
"@skipOnFailure:'Install failed, skipping tests'".

We could go even further, make these tests even more special, and pass 
the `job` object to them. One could then access the results of previous 
tests and tailor the condition to their needs, as sketched below.
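
Again a hedged sketch: the `self.job` attribute and the shape of its 
`result.tests` records are assumptions for illustration, not current 
Avocado API:

    from avocado import Test


    class SkipUnlessAllPassed(Test):
        """Skip the guarded block unless every earlier test passed."""

        def test(self):
            # Walk all results recorded so far and tailor the condition
            # to our needs
            failed = [t for t in self.job.result.tests
                      if t['status'] not in ('PASS', 'SKIP')]
            if failed:
                self.fail('%d earlier test(s) did not pass'
                          % len(failed))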


Something like control files
============================

For more specific (QA) needs, we could quite easily allow users to 
specify custom Python (any language?) files which would trigger tests 
via an API. The possibilities would be limitless: you could run several 
tests in parallel, wait for them to finish, interact with the jobs... 
whatever you want. As all test stages are defined as callbacks, output 
plugins should handle this properly, with two small catches (a sketch 
of such a control file follows the list):

1. console output - it would correctly mark test starts, but the ends 
could overlap. We already plan to rework this, as we want to support 
running tests in parallel (one proposed solution is to display one of 
the running tests and cycle through them: show the finished one with 
its status and then pick the next one; my demonstration implementation 
should be hanging around).

2. multiplexer - people would have to write these files with the 
multiplexer in mind. They might want to spawn multiple variants of the 
tests, or first run all tests in the first variant and only then move 
on to the next one... I think the default should be to run the whole 
"control" file in the first variant, then the second, and so on, but we 
should allow people to iterate through the variants while spawning the 
tests.
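
For illustration, such a file might look like the sketch below. The 
whole Job API used here (`run_test`, `wait`, `variants`) is invented 
for this RFC; Avocado exposes nothing like it today:

    # Hypothetical "control" file driving an Avocado job; every call
    # below is an assumed API, shown only to convey the idea
    from avocado.core.job import Job

    job = Job()

    # Run two tests in parallel and wait for both to finish
    first = job.run_test('test1', background=True)
    second = job.run_test('test2', background=True)
    job.wait(first, second)

    # Interact with previous results before deciding what to run next
    if job.result.status('test1') == 'PASS':
        job.run_test('test3')

    # Iterate through multiplexer variants explicitly, instead of the
    # proposed default of re-running the whole file once per variant
    for variant in job.variants:
        job.run_test('test4', variant=variant)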


Kind regards,
Lukáš



