[Avocado-devel] RFC: Test dependencies

Lukáš Doktor ldoktor at redhat.com
Mon Dec 7 15:46:23 UTC 2015


On 04-12-2015 17:32, Olav Philipp Henschel wrote:
> Hello Lukáš,
> I would like to clarify what's your idea of the control files, by giving
> samples of how I imagine them.
>
> On 04-12-2015 11:59, Lukáš Doktor wrote:
>> Something like control files
>> ============================
>>
>> For more specific (QA) needs, we might quite easily allow users to
>> specify custom Python (any language?) files, which would trigger
>> tests via an API. The possibilities would be limitless: you could run
>> several tests in parallel, wait for them to finish, interact with the
>> jobs... whatever you want. As all test stages are defined as
>> callbacks, output plugins should handle this properly, with two
>> little catches:
>>
>> 1. console output - it would correctly mark test starts, but the
>> ends would overlap. We already plan to rework this, as we want to
>> support running tests in parallel. One proposed solution is to
>> display one of the running tests and circle through them: show the
>> finished one with its status, then pick the next one (see the sketch
>> after this list). My demonstration implementation should be hanging
>> around.
>>
>> 2. multiplexer - people would have to write those files with the
>> multiplexer in mind. They might want to spawn multiple variants of
>> the tests, or first run all tests in the first variant and then in
>> the next one... I think the default should be to run the whole
>> "control" file in the first variant, then the second, ..., but we
>> should allow people to iterate through variants while spawning the
>> tests.
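As a rough illustration of the console idea in catch 1 above, here is a
minimal, hypothetical sketch; the test handles with a `name` and a
`status` attribute are assumptions made for the example, not an existing
Avocado API:

```
import time

def follow_tests(tests):
    # tests: hypothetical handles with .name and .status attributes;
    # .status stays None while the test is still running
    pending = list(tests)
    current = None
    while pending:
        # report every test that finished since the last pass
        for test in list(pending):
            if test.status is not None:
                print("%s: %s" % (test.name, test.status))
                pending.remove(test)
                if test is current:
                    current = None
        # circle on to the next still-running test
        if current is None and pending:
            current = pending[0]
            print("now following: %s" % current.name)
        time.sleep(0.5)
```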
>
> If this custom Python file receives as parameters both the multiplexer
> variants and the tests to run, this could be the default one:
>
> for variant in variants:
>      for test in tests:
>          avocado.runTest(variant, test)
>
> This would run all tests in the first variant, then the second...
> If I wanted to run all variants of the first test first, then the
> second test... I could just swap the for loops:
>
> for test in tests:
>      for variant in variants:
>          avocado.runTest(variant, test)
>
> If I wanted to specify pre-conditions for the tests, I could create a
> specific file like:
>
> for variant in variants:
>      if avocado.runTest(variant, Test("unattended_install")) != PASS:
>          break
>      avocado.runTest(variant, Test("test1"))
>      avocado.runTest(variant, Test("test2"))
>      if avocado.runTest(variant, Test("update_image")) != PASS:
>          break
>      if avocado.runTest(variant, Test("unattended_install")) != PASS:
>          break
>      avocado.runTest(variant, Test("test3"))
>      avocado.runTest(variant, Test("test4"))
>
> The downside is that skipped tests would not be marked as skipped. This
> could be solved by adding a skip condition to tests:
>
> for variant in variants:
>      skip_condition = avocado.runTest(variant, Test("unattended_install")) != PASS
>      avocado.runTest(variant, Test("test1"), skip_condition)
>      avocado.runTest(variant, Test("test2"), skip_condition)
>      skip_condition = avocado.runTest(variant, Test("update_image"), skip_condition) != PASS
>      skip_condition = avocado.runTest(variant, Test("unattended_install"), skip_condition) != PASS
>      avocado.runTest(variant, Test("test3"), skip_condition)
>      avocado.runTest(variant, Test("test4"), skip_condition)
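A sketch of what runTest() could do with such a skip_condition
parameter, just to make the semantics concrete (report(), execute() and
the SKIP/PASS constants are hypothetical placeholders, like PASS in the
snippet above):

```
def runTest(variant, test, skip_condition=False):
    # mark the test as SKIP instead of silently not running it
    if skip_condition:
        report(variant, test, SKIP)
        return SKIP
    return execute(variant, test)
```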
>
>
> Does that look like what you were thinking? If not, could you provide
> a sample file?

Yep, something like this, plus an async version, which would allow
greater control:

```
     avocado run my_control_file.py
```

my_control_file.py:

```
import avocado
import time

if __name__ == '__main__':
    cleanup = False
    # in this async version, runTest() starts the test and returns
    # immediately with a handle to the running test
    test1 = avocado.runTest("foo")
    test2 = avocado.runTest("bar")
    time.sleep(5)
    test3 = avocado.runTest("baz")
    # wait_for() blocks until "foo" finishes; assuming it returns a
    # truthy value when the test did not pass, schedule the cleanup
    if test1.wait_for():
        cleanup = True
    # status stays None while the test is still running
    if test2.status is None:
        test2.abort("Explanation why we aborted the test")
    avocado.wait_for()    # wait for all remaining tests to finish
    if cleanup:
        avocado.runTest("cleanup").wait_for()
```

This would produce something like:

```
2. bar: PASS
1. foo: FAIL
3. baz: PASS
4. cleanup: PASS
```

or

```
1. foo: PASS
2. bar: ERROR
3. baz: PASS
```

And the json/xunit output would report the tests as they were executed.
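For context, here is a minimal sketch of how such an asynchronous
runTest() handle could be modeled with threads. This only illustrates
the proposed semantics; runTest(), status, wait_for() and abort() are
the names from this thread, not an existing Avocado API, and the
assumption that wait_for() returns True when the test did not pass is
taken from the cleanup logic in the example above:

```
import threading

class TestHandle:
    # hypothetical handle for one asynchronously running test
    def __init__(self, name, func):
        self.name = name
        self.status = None          # None while the test is running
        self._abort = threading.Event()
        self._thread = threading.Thread(target=self._run, args=(func,))
        self._thread.start()

    def _run(self, func):
        try:
            # func receives the abort event so it can stop cooperatively
            self.status = "PASS" if func(self._abort) else "FAIL"
        except Exception:
            self.status = "ERROR"

    def wait_for(self):
        # block until the test finishes; True means "did not pass"
        self._thread.join()
        return self.status != "PASS"

    def abort(self, reason):
        # ask a still-running test to stop, logging the reason
        if self.status is None:
            print("aborting %s: %s" % (self.name, reason))
            self._abort.set()
            self._thread.join()

_handles = []

def runTest(name, func):
    # start a test in the background and return its handle
    handle = TestHandle(name, func)
    _handles.append(handle)
    return handle

def wait_for():
    # wait for every test started so far to finish
    for handle in _handles:
        handle._thread.join()
```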

Regards,
Lukáš


>
>
> Regards,
> Olav P. Henschel



