[Avocado-devel] RFC: Test dependencies

Lukáš Doktor ldoktor at redhat.com
Tue Dec 8 11:44:22 UTC 2015


On 7. 12. 2015 at 16:46, Lukáš Doktor wrote:
> On 4. 12. 2015 at 17:32, Olav Philipp Henschel wrote:
>> Hello Lukáš,
>> I would like to clarify your idea of the control files by giving
>> samples of how I imagine them.
>>
>> On 04-12-2015 11:59, Lukáš Doktor wrote:
>>> Something like control files
>>> ============================
>>>
>>> For more specific (QA) needs, we might quite easily allow specifying
>>> custom Python (any language?) files, which would trigger tests via an
>>> API. The possibilities would be limitless: you could run several
>>> tests in parallel, wait for them to finish, interact with the jobs...
>>> whatever you want. As all test stages are defined as callbacks, output
>>> plugins should handle this properly, with 2 little catches:
>>>
>>> 1. console output - it'd correctly mark test starts, but the ends
>>> would overlap. We already plan to rework this as we want to support
>>> running tests in parallel (one proposed solution is to display one of
>>> the running tests and cycle through them, show the finished one with
>>> its status and then pick the next one; my demonstration implementation
>>> should still be around)
>>>
>>> 2. multiplexer - people would have to write those files with the
>>> multiplexer in mind. They might want to spawn multiple variants of the
>>> tests, or first run all tests in the first variant and then the next
>>> one... I think the default should be to run the whole "control" file
>>> in the first variant, then the second, ..., but we should allow people
>>> to iterate through variants while spawning the tests.
>>
>> If this custom python file receives as parameters both the multiplexer
>> variants and tests to run, this could be the default one:
>>
>> for variant in variants:
>>      for test in tests:
>>          avocado.runTest(variant, test)
>>
>> This would run all tests in the first variant, then the second...
>> If I wanted to run all variants for the first test first, then the
>> second test... I could just invert the fors:
>>
>> for test in tests:
>>      for variant in variants:
>>          avocado.runTest(variant, test)
>>
>> If I wanted to specify pre-conditions for the tests, I could create a
>> specific file like:
>>
>> for variant in variants:
>>      if avocado.runTest(variant, Test("unattended_install")) != PASS:
>>          return
>>      avocado.runTest(variant, Test("test1"))
>>      avocado.runTest(variant, Test("test2"))
>>      if avocado.runTest(variant, Test("update_image")) != PASS:
>>          return
>>      if avocado.runTest(variant, Test("unattended_install")) != PASS:
>>          return
>>      avocado.runTest(variant, Test("test3"))
>>      avocado.runTest(variant, Test("test4"))
>>
>> The downside is that skipped tests would not be marked as skipped. This
>> could be solved by adding a skip condition to tests:
>>
>> for variant in variants:
>>      skip_condition = avocado.runTest(variant, Test("unattended_install")) != PASS
>>      avocado.runTest(variant, Test("test1"), skip_condition)
>>      avocado.runTest(variant, Test("test2"), skip_condition)
>>      skip_condition = avocado.runTest(variant, Test("update_image"), skip_condition) != PASS
>>      skip_condition = avocado.runTest(variant, Test("unattended_install"), skip_condition) != PASS
>>      avocado.runTest(variant, Test("test3"), skip_condition)
>>      avocado.runTest(variant, Test("test4"), skip_condition)
>>
>>
>> Does that look like what you were thinking? If not, could you provide
>> a sample of what such a file would look like?
>
> Yep, something like this, plus also an async version, which would allow
> greater control:
>
> ```
>      avocado run my_control_file.py
> ```
>
> my_control_file.py:
>
> ```
> import avocado
> import time
>
> if __name__ == '__main__':
>      cleanup = False
>      test1 = avocado.runTest("foo")
>      test2 = avocado.runTest("bar")
>      time.sleep(5)
>      test3 = avocado.runTest("baz")
>      if test1.wait_for():
>          cleanup = True
>      if test2.status is None:
>          test2.abort("Explanation why we aborted the test")
>      avocado.wait_for()    # Wait for all tests to finish
>      if cleanup:
>          avocado.runTest("cleanup").wait_for()
> ```
>
> Would produce something like
>
> ```
> 2. bar: PASS
> 1. foo: FAIL
> 3. baz: PASS
> 4. cleanup: PASS
> ```
>
> or
>
> ```
> 1. foo: PASS
> 2. bar: ERROR
> 3. baz: PASS
> ```
>
> And the json/xunit output would report the tests as they were executed.
>
Let me expand on it a bit. The way this should work is actually very 
similar to how avocado.core.runner works, and the implementation is 
mostly a matter of cleaning it up and making the runner object available.

The way the runner works, expressed in terms of this "control" file, is:

```
for template in test_suite:
    for variant in mux.itertests(template):
        avocado.runTest(variant).wait_for()
```

Right now this only works synchronously (one test at a time), but we 
already plan to refactor it and allow running multiple jobs at once. We 
can take this opportunity to make these entry points available. I'm 
talking about:

```
template = avocado.discover(url)
mux = avocado.mux(multiplex_files)
template_with_params = mux.itertests(template).next()
running_test = avocado.run_test(template_with_params)
```

where running_test supports:
* wait_for(timeout=None) => wait until the test finishes
* status => the TestStatus class (query for status, get messages from the test, ...)
* abort(reason) => abort the execution
...
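
Just to make that concrete, here is a rough sketch of what such an object 
might provide; the class below does not exist, and the names and signatures 
are only assumptions based on the list above:

```
class RunningTest(object):

    """Handle returned by avocado.run_test() (proposed, not an existing API)."""

    def wait_for(self, timeout=None):
        """Block until the test finishes (or the timeout expires)."""

    @property
    def status(self):
        """TestStatus object: query the state, get messages from the test, ..."""

    def abort(self, reason):
        """Abort the execution, recording the given reason."""
```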


By implementing this we'd get a cleaner runner, and if we make these 
entry points available to users, it'd allow them to write custom scripts 
while utilizing the avocado infrastructure.
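
As a minimal sketch of such a custom script (assuming the proposed entry 
points above; the test reference and the yaml file below are made up):

```
import avocado

template = avocado.discover("io-stress.py")    # hypothetical test reference
mux = avocado.mux("stress.yaml")               # hypothetical multiplex file

# Spawn all variants asynchronously, then wait for each of them
running = [avocado.run_test(variant) for variant in mux.itertests(template)]
for test in running:
    test.wait_for()
```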

When running this inside avocado, they wouldn't need to do anything else, 
as we set up the results ourselves. But they might as well pass a custom 
results object (or instantiate our results object) to the runner and use 
it completely separately from avocado.
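
For example (only a sketch; the Runner constructor and the callback names 
below are assumptions, not an existing interface):

```
import avocado


class MyResults(object):

    """Invented results collector, kept completely separate from avocado."""

    def __init__(self):
        self.outcomes = {}

    def start_test(self, state):
        print("started: %s" % state.get("name"))

    def end_test(self, state):
        self.outcomes[state.get("name")] = state.get("status")


results = MyResults()
runner = avocado.Runner(results=results)    # hypothetical constructor
runner.run_test(template_with_params).wait_for()
print(results.outcomes)
```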


Another example usage of this API would be to use various multiplex 
files for different tests (or combinations):

```
# One multiplex file per aspect; they must yield the same number of variants
all_nics = avocado.mux("nics.yaml")
all_cpus = avocado.mux("cpus.yaml")
assert len(all_nics) == len(all_cpus)

nics_test = avocado.discover(nics_related_test)
cpus_test = avocado.discover(cpus_related_test)
combination_test = avocado.discover(combination)

nics = all_nics.itertests(nics_test)
cpus = all_cpus.itertests(cpus_test)
comb = all_cpus.itertests(combination_test)

for _ in xrange(len(all_nics)):
    avocado.run_test(nics.next())    # next nic variant of the nics test
    avocado.run_test(cpus.next())    # next cpu variant of the cpus test
    for tst in all_nics.itertests(comb.next()):
        avocado.run_test(tst)        # all nic variants of this combination variant
    avocado.wait_for()               # wait for this batch to finish
```

But basically the idea is still the same: be able to specify a test, 
execute it and wait for it to finish, and later even interact with it 
using the TestStatus. We'd be able to reuse existing simple tests and 
create complex setups.
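
For example, a control file could watch a long-running test and abort it 
when it exceeds a time budget (again only a sketch; the polling loop and 
the 10 minute budget are assumptions):

```
import time

test = avocado.run_test(template_with_params)
deadline = time.time() + 600
while test.status is None:    # None means the test is still running
    if time.time() > deadline:
        test.abort("Exceeded the 10 minute budget for this test")
        break
    time.sleep(1)
test.wait_for()
```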

> Regards,
> Lukáš
>
>
>>
>>
>> Regards,
>> Olav P. Henschel
>>