[Avocado-devel] General test running questions

Lukáš Doktor ldoktor at redhat.com
Mon Dec 7 15:28:33 UTC 2015


On 3.12.2015 at 13:54, Olav Philipp Henschel wrote:
>
>
> On 02-12-2015 15:49, Lukáš Doktor wrote:
>>> On 2.12.2015 at 07:29, Lucas Meneghel Rodrigues wrote:
>>>
>>>
>>> On Tue, Dec 1, 2015 at 1:38 PM Olav Philipp Henschel
>>> <olavph at linux.vnet.ibm.com> wrote:
>>>
>>>     I have a few questions regarding test running, the multiplexer
>>> and test
>>>     variants:
>>>
>>>     In avocado-vt, each test has a cartesian config file that is applied
>>>     only to that single test. Is there any plan to do something
>>> similar with
>>>     multiplex files? Sometimes I want to create variants to one test
>>> but not
>>>     the other.
>>>
>>>
>>> Strictly speaking, each individual .cfg file ends up merged into a
>>> goddamn giant subtests.cfg file.
>>>
>>> I'd say we want to support modularity between multiplex files, but I'm
>>> not sure if we already have an include system implemented there. Have
>>> we, Lukas?
>>
>> Hello Olav, as Lucas said, this is not supported. I sent an RFC a
>> year ago that tried to make it possible to specify some default
>> multiplexations:
>>
>> https://www.redhat.com/archives/virt-test-devel/2014-November/msg00004.html
>>
>>
>> but so far we have had neither the time nor the consensus to really
>> work on it. The multiplexer is still quite young and we need to
>> improve it. It is also possible that we might come up with a
>> completely different design than the RFC (we probably won't change
>> the multiplexer itself, though; it seems to work quite well).
>
> Ok, as I understand it, for now, if I want to apply variants to a
> single test, I'd have to run that test separately, right?
> It'd be nice to have test-specific files or filters.
>
Right now this is not possible, nor is it planned for the short-term 
future. You can try to come up with an RFC with use cases and example 
usage, and we'll see what we can do about it. But I can't make any 
promises.
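In the meantime, the closest workaround is one avocado invocation per 
test, each with its own variants file. Purely as a sketch (the file 
name, keys, and values below are made up, and the exact multiplex YAML 
syntax depends on your avocado version):

```yaml
# mytest-variants.yaml -- hypothetical per-test multiplex file;
# only the invocation that loads this file gets multiplied into variants
mytest:
    fast:
        timeout: 10
    slow:
        timeout: 120
```

You would then pass it only to the run that should be multiplied, e.g. 
`avocado run tests/mytest.py --multiplex mytest-variants.yaml` (flag 
name from avocado of this era; it may have changed), and run the other 
tests in a separate invocation without it.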

Regards,
Lukáš


>>
>>>
>>>     Is there currently any way of making a test dependent on another?
>>> For
>>>     example, if an unattended_install test fails, I want avocado to
>>> skip the
>>>     other tests.
>>>
>>>
>>> For avocado-vt, yes, sure. The cartesian config has provisions for that.
>>> However, for avocado-virt (AKA the next generation), we had endless
>>> discussions about that and reached the conclusion that having a
>>> dependency system is bad design. Each test should be able to run
>>> independently of other tests. If you need an installed vm for the tests,
>>> you should be able to specify that as a requirement for your test. The
>>> requires system is something still being designed.
>>
>> This question caused the delay, as I don't have any official info
>> yet. The thing is that I'm also solving similar issues in our grid.
>> Currently I use Jenkins pipelines to define the relations
>> (unattended_install -> tests | cleanup). It'd definitely be nice to
>> have it in avocado itself, but we need to come up with flexible, yet
>> simple-to-write definitions.
>>
>> I quite like the way pipelines work in Jenkins, so I can imagine a
>> similar approach. Alternatively, we could reuse YAML to define the flow.
>>
>> To wrap it up: I don't have an official statement and I don't know
>> whether this is a priority right now, but I'd like to work on it.
>> Please bear with us (before Christmas there are many PTOs delaying
>> the discussions). We'd also welcome your ideas or suggestions.
>
> I've opened an issue
> (https://github.com/avocado-framework/avocado-vt/issues/282), because
> apparently the avocado-vt dependencies are not working.
> I will think about possible ways to support this.
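To sketch what a YAML flow definition might look like (nothing like 
this exists in avocado today; the keys and structure are invented 
purely to illustrate the Jenkins-style relation mentioned above):

```yaml
# hypothetical flow file -- not a real avocado feature;
# encodes the relation: unattended_install -> tests | cleanup
flow:
  - unattended_install      # must pass before anything below is run
  - parallel:               # these could run concurrently
      - cpu_tests
      - io_tests
  - cleanup                 # runs last, regardless of test results
```

Whether the definitions end up in YAML or a pipeline-style DSL, the 
hard part is keeping them flexible yet simple to write.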
>
>>
>>>
>>>     Is there any way of specifying a set of tests to run, instead of
>>> passing
>>>     each one of them? In avocado-vt, we can specify the test
>>> provider, or a
>>>     substring of the complete test name. It would be useful if we
>>> could pass
>>>     globs or something like that to execute all tests in a directory.
>>>
>>>
>>> You can pass the test directory to avocado. The test resolver will find
>>> and run all the tests inside that directory:
>>>
>>> avocado run examples/tests
>>> JOB ID     : 90da56e7af74bc493a6c8990d805c585380abc6b
>>> JOB LOG    :
>>> /home/lmr/avocado/job-results/job-2015-12-02T04.27-90da56e/job.log
>>> TESTS      : 57
>>>   (1/57) examples/tests/warntest.py:WarnTest.test: WARN (0.00 s)
>>>   (2/57) examples/tests/passtest.py:PassTest.test: PASS (0.00 s)
>>>   ...
>>>
>>> So I believe that covers your use case.
>>
>> Yep, we execute programs from shell, which means you can use shell to
>> specify what should be executed. Feel free to use:
>>
>>     avocado run examples/tests/a*
>>     avocado run examples/tests/doublefree*.py
>>     avocado run `find examples/ -iname 'a*.py' | grep -v "foo"`
>>
>> or other bashisms...
>
> Nice, I didn't know that worked.
>
> Thank you for your time,
> Olav P. Henschel
>
> _______________________________________________
> Avocado-devel mailing list
> Avocado-devel at redhat.com
> https://www.redhat.com/mailman/listinfo/avocado-devel



