[Avocado-devel] Avocado 101 questions

Lukáš Doktor ldoktor at redhat.com
Wed Jan 30 18:23:29 UTC 2019


On 30. 01. 19 at 8:52, Fajun Chen wrote:
> Hi Lukas,
> 
> Thank you very much for your support. I have some additional questions as
> listed below.
> 

Hello Fajun Chen,

> On Mon, Jan 28, 2019 at 9:35 AM Lukáš Doktor <ldoktor at redhat.com> wrote:
> 
>> On 23. 01. 19 at 18:54, Fajun Chen wrote:
>>> Hi,
>>>
>>
>> Hello Fajun Chen,
>>
>> I'm sorry for the delay, I was at DevConf in Brno this weekend.
>>
>>> I just got started on Avocado. I was able to build, list and run tests
>>> without issues after reading the documents. But I still have some
>>> questions, which don't seem to be covered by the documents.
>>>
>>>
>>>    1. How to define the test sequence in yaml/json? An example file with
>>>    test references and test parameters would be very helpful.
>>
>> The simplest way is to enumerate the tests on the cmdline. We do support filters, so
>> people can specify test methods, classes, files or directories, and the
>> order is preserved (alphabetical for files/dirs and according to location
>> for test methods/classes).
>> https://avocado-framework.readthedocs.io/en/latest/Loaders.html
>>
>> For finer-granularity we have `yaml_loader`
>> https://avocado-framework.readthedocs.io/en/latest/optional_plugins/yaml_loader.html
>> that allows specifying tests in a yaml file. Basic usage is:
>>
>> ```
>> !mux
>> 1:
>>     test_reference: passtest.py
>> 2:
>>     test_reference: failtest.py
>> ```
>>
>> but the format allows you to define loaders, loader parameters, change some
>> avocado arguments and even directly modify test params. I'm using it to
>> run various external_runner tests together with avocado-vt tests using
>> different params in a single job.
>>
>>
> What's the syntax to set test params for a test (not a test file) in the yaml
> loader file? For instance, how to pass sleep_cycles and sleep_length
> into the test in sleeptenmin.py:
>       test_reference: SleepTenMin.test
>                  # how to set test params for the test?
> 

The yaml_loader iterates through the resulting params and looks for "test_reference" and other keywords (see https://avocado-framework.readthedocs.io/en/latest/optional_plugins/yaml_loader.html for the complete list). In the end it adds all of the resolved tests (multiple can match) to the job and attaches the current params slice to them. This means each test gets the "test_reference" and all the other variables of the current slice defined in the yaml file:

# loader.yaml
!mux
test1:
    test_reference: passtest.py
    foo: bar
test2:
    test_reference: passtest.py
    foo: baz

$ avocado --show all run loader.yaml --dry-run
...
avocado.test: INIT 1-passtest.py:PassTest.test;-e630
...
avocado.test: Test params:
avocado.test: /run/test1:test_reference ==> passtest.py
avocado.test: /run/test1:foo ==> bar
...
avocado.test: INIT 2-passtest.py:PassTest.test;-e630
...
avocado.test: Test params:
avocado.test: /run/test2:test_reference ==> passtest.py
avocado.test: /run/test2:foo ==> baz

(Note: `--dry-run` is one of the neat features here; it lets you see all globally-available params.)


> Does the yaml loader support test discovery by tags?
> 

It is possible to discover directories with the file loader, but unfortunately it is not possible to override filter-by-tags per variant. What I mean is that you can only specify one tag filter globally via `--filter-by-tags` on the command line, and it is applied to all yaml_loader references.
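
For illustration, here's a sketch (the tag name and files are made up): tags go into the test's docstring, and the single global filter is then applied to everything the yaml file resolves to:

```
from avocado import Test


class NetTest(Test):

    def test_ping(self):
        """
        :avocado: tags=net
        """
        self.log.info("network-related checks would go here")
```

Running something like `avocado run loader.yaml --filter-by-tags net` would then keep only the tagged tests, but the same filter applies to every reference in the yaml file; there is currently no way to give `test1:` and `test2:` different tag filters.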

This is because the filter-by-tags feature was implemented as a global filter on the resulting suite. Your request seems valid, though, and we should probably re-evaluate it and make it part of the loaders. That would require slightly deeper changes to the loaders, as each of them would have to do the filtering during discovery (which shouldn't be that hard, since all loader plugins "should" inherit from the base TestLoader, so an implementation there could serve as the basis for all of them).

Anyway, let's discuss it at the release/planning meeting https://trello.com/c/poL0jWIi/1487-rfc-move-filter-by-tag-to-loaders (feel free to join, but don't feel obligated, we'll discuss it anyway).

> 
> 
>>>    2. Could the tests in a test job be executed sequentially and in
>>>    predefined order?
>>
>> See (1), there is a defined order.
>>
>>>    3. Could we skip subsequent tests when a test fails?
>>
>> Unfortunately this is not available as a feature yet. There is a workaround: use
>> `teststmpdir` to create a file and, in the `setUp` phase of the following tests,
>> check for its presence. We also have an RFC to implement a Job API, which
>> should allow dynamic job definition
>> https://trello.com/c/hRmShGIp/671-rfc-job-api but there is no ETA yet.
>>
>>>    4. How to abort test job upon critical test failure?
>>
>> The same situation. We don't have framework support for it; one can use the
>> same workaround to skip all remaining tests. Anyway, this feature should be
>> fairly simple to develop and I see value in it. We could simply add a
>> CRITICAL test status and abort the execution in such a case.
>>
>> Note if you mean to interrupt testing after any failed test, you can use
>> https://avocado-framework.readthedocs.io/en/latest/GetStartedGuide.html#interrupting-the-job-on-first-failed-test-failfast
> 
> 
> This fail fast feature can probably meet our needs assuming it supports the
> tests from either command-line or yaml loader. Thanks.
> 

The source of the tests does not matter, that's a different level. The fail-fast feature is very simple and basically just interrupts the job whenever there is a failure. It does not care whether that is the first, the last, or the only test in the queue. (It works the same way as python-unittest's --failfast.)
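
If fail-fast turns out to be too coarse and you do need the per-test workaround mentioned in (3), a minimal sketch of what I mean (the marker file name is just an example):

```
import os

from avocado import Test


class DependentTest(Test):

    def setUp(self):
        # "critical_failed" is an arbitrary marker name; a previous test
        # would create this file in teststmpdir when it hits a critical error
        marker = os.path.join(self.teststmpdir, "critical_failed")
        if os.path.exists(marker):
            self.cancel("A previous critical test failed, cancelling this one")

    def test(self):
        self.log.info("runs only when no critical failure was recorded")
```

The `teststmpdir` is shared by the tests of a job, so the earlier test only needs to create the marker file in its failure path.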

>>
>>
>>>    5. How to add custom code to handle exceptions from tests?
>>
>> Directly to the tests. Not sure what exactly you have in mind, but you can
>> have your own utils library and use it in the tests.
>>
>> Note if you mean test failure vs. test error, we do have some decorators
>> https://avocado-framework.readthedocs.io/en/latest/WritingTests.html#turning-errors-into-failures
>> to declare certain exceptions as test failures, rather than anonymous test
>> errors.
>>
> 
> We would like to catch the exceptions from tests and take some actions, such
> as collecting error logs. Should this custom code raise the exception again
> so Avocado can handle it? It would be nice to hide this from the tests. We're
> thinking about creating a subclass of avocado.Test and adding a custom
> exception handler there.
> 

This is definitely the encouraged way, especially when you have many custom things to do in your testing. We even have documentation on how to do that: https://avocado-framework.readthedocs.io/en/latest/SubclassingAvocado.html

and our sub-project Avocado-vt might serve as an example of a very complex test class (including a custom loader, which might not be necessary in your case): https://github.com/avocado-framework/avocado-vt

Another, simpler example of a subclass is in fedora-modularity: https://github.com/fedora-modularity/meta-test-family/blob/master/moduleframework/avocado_testers/avocado_test.py (disclaimer: we have nothing to do with that project).
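
As a rough sketch of the idea (the collect_error_logs helper and run_step wrapper are hypothetical, not existing Avocado API):

```
from avocado import Test


class CompanyTest(Test):
    """Hypothetical base class that collects logs before re-raising."""

    def collect_error_logs(self):
        # Hypothetical helper: gather device/system logs into the per-test
        # output directory (self.outputdir is provided by avocado.Test)
        self.log.info("collecting error logs into %s", self.outputdir)

    def run_step(self, func, *args, **kwargs):
        # Run one step of the test; on failure collect the logs and
        # re-raise so Avocado still records the failure/error
        try:
            return func(*args, **kwargs)
        except Exception:
            self.collect_error_logs()
            raise
```

And yes, after your custom handling you generally want to re-raise the exception so Avocado can still account for the result.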

> 
>>
>>>    6. rsync is used to upload test logs to a remote server, which is
>>>    started after test completion. How to upload test logs incrementally
>>>    for a long-running test before it completes?
>>
>> Avocado supports streamed results like TAP or journal. We don't have a
>> plugin to upload individual test results (or even to stream the files as
>> the test runs), but we could give you pointers to develop such a plugin. Note
>> that if you only need to upload the log files after test execution, the plugin
>> would be as simple as:
>>
>>     def test_end(self, test):
>>         self.publish(test.job.logdir)
>>
>> where publish would simply rsync the logdir to your server. Syncing files
>> as the test goes would probably be a bit harder and we'd have to think
>> about it. One way is to add a logger, but that way you only get logs and not
>> the produced files. Another would be to have a plugin that creates a
>> thread and keeps syncing the logdir. Again, it should not be that hard; we
>> could give you pointers, but I'm not sure how significant it currently is for us.
>>
> 
> Thanks for the ideas. We could start with rsync at test end and add a feature
> for periodic sync later for long-running tests.
> 

Sure. Btw, if the job log is enough for you, you can use `avocado --show test` to print the test log to the console (or `--show all`).
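
If/when you get to writing the upload plugin, a very rough sketch of what I had in mind (based on the ResultEvents plugin interface; please double-check the exact method signatures and the state keys against the plugin docs, and the rsync destination is just a placeholder):

```
from avocado.core.plugin_interfaces import ResultEvents
from avocado.utils import process


class LogSync(ResultEvents):
    """Sketch: rsync each test's log directory when the test ends."""

    name = "logsync"
    description = "Uploads per-test logs at the end of each test"

    def pre_tests(self, job):
        pass

    def post_tests(self, job):
        pass

    def start_test(self, result, state):
        pass

    def test_progress(self, progress=False):
        pass

    def end_test(self, result, state):
        logdir = state.get("logdir")  # per-test results directory
        if logdir:
            # "user@server:/logs/" is a placeholder destination
            process.run("rsync -a %s user@server:/logs/" % logdir)
```

(plus the usual setuptools entry point so Avocado picks it up as a result-events plugin).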

And speaking of loggers, you can define a specific logger in your tests and use it to mark important steps. We used to use something like that in Avocado-vt; with that log enabled in the output (`avocado --show app,context`) it would show something like:

JOB ID     : d5eb807b581736ffa923c4305287af82a014718d
JOB LOG    : /home/medic/avocado/job-results/job-2019-01-30T19.02-d5eb807/job.log
 (1/1) io-github-autotest-qemu.boot: |
Starting VM/
SSH to VM-
Shutting down VM\
PASS (19.26 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 20.17 s
JOB HTML   : /home/medic/avocado/job-results/job-2019-01-30T19.02-d5eb807/results.html

Documentation about this feature is here: https://avocado-framework.readthedocs.io/en/latest/WritingTests.html#advanced-logging-capabilities
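
A minimal sketch of that (the logger name "progress" is just an example; any name works as long as you enable it in the output):

```
import logging

from avocado import Test

# named logger used to mark the important steps of the test
progress = logging.getLogger("progress")


class BootTest(Test):

    def test(self):
        progress.info("Starting VM")
        # ... boot it ...
        progress.info("SSH to VM")
        # ... exercise it ...
        progress.info("Shutting down VM")
```

and it gets shown in the console the same way as above, e.g. `avocado --show app,progress run ...`.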

>>
>>>    7. How to run multiple test jobs in parallel?
>>
>> Unfortunately the current runner allows only a single job per execution.
>> Anyway, running multiple avocado instances at the same time is not a
>> problem. This is another goal of the Job API RFC mentioned in answer (3).
>>
>>>    8. The test logs from low-level lib modules are captured in job.log
>>>    instead of in test-specific directories. Just wondering if there are hooks
>>>    in place to customize the logger so more test data is reported in the
>>>    test-specific logs.
>>
>> I'm not sure I understand this properly. There is the `$results/job.log`
>> file that should contain some job logs + all logs from tests. Then there
>> are the per-test `$results/test-results/*/debug.log` logs that contain the
>> part of `job.log` from when the test starts until it ends. They should not
>> contain more, nor less (with one exception, an interrupted test, where extra
>> messages might get injected out-of-place). If that's not the case then we
>> have a bug and would like to ask you for a reproducer.
>>
> 
> Thanks for the clarification. I'll double check to see if the logs are
> organized per design. Will let you know.
> 
> 
>>
>> Also note that the test is a python script, so an "evil" test might re-define
>> its loggers. Anyway, even an "evil" test should not affect the main Avocado
>> loggers, nor the other executed tests. We have a strict boundary between the
>> runner and each test (they are actually different processes), as in other
>> frameworks we had problems with tests affecting other tests.
>>
> 
> I understand the rationale behind running each test in a separate process
> now. This does pose a problem for manual/interactive testing where we need
> user input during a test. How can we support this in Avocado?
> 

Well, we try to prevent that as much as we can. STDIN is always closed in Avocado tests, as we believe tests should be fully automated, and we have aexpect (https://github.com/avocado-framework/aexpect) to interact with the programs under test.
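
For illustration, driving a program with aexpect looks roughly like this (a sketch from memory, please check the aexpect docs for the exact API):

```
import aexpect

# spawn an interactive shell and drive it from the test
session = aexpect.ShellSession("/bin/bash")
try:
    # cmd() sends the command and returns its output, raising an
    # exception on a non-zero exit status
    output = session.cmd("uname -r")
    print("kernel: %s" % output.strip())
finally:
    session.close()
```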

Still, for debugging purposes it is possible to interact with the program via sockets. I'm using `pydev` to debug/single-step test execution, and we also have support for gdb, which interrupts the test and provides a shell script to connect to the process when it reaches a breakpoint. Similarly, you can write your own plugins to allow manual testing, but I'd rather you reconsidered that option and used random timeouts and aexpect...

>>
>> All in all, Avocado is a fairly established project now; we still have the
>> Job API coming, persistent tests (allowing a reboot during test execution) and
>> multi-stream (multi-host) testing in progress. The current LTS version is
>> primarily python2-compatible, but for a longish period master has been
>> python3 by default with backward compatibility, and we are about to
>> release it as another LTS (LTS releases are maintained for about 1.5 years,
>> there is an overlap with the previous LTS to allow a smooth transition, and we
>> have guidelines on what changed between versions and how to adjust). While
>> evaluating, definitely check out `avocado.utils.*`, where we have several
>> useful utils to speed up writing tests, and also the Aexpect sub-project
>> https://github.com/avocado-framework/aexpect that is inspired by the Expect
>> language and derives from the pexpect project, but is improved (it was
>> actually forked because of slow pexpect development) and could be useful
>> even if you decide not to use Avocado itself.
>>
> 
> Thanks again for the info. Will explore these utils. I'm impressed with the
> modular architecture of Avocado. Kudos to the development team! I have some
> items on my wish list:

Thank you, there are still some burdens to be refactored but overall I do like Avocado.

> - Web front end

There is https://github.com/avocado-framework/avocado-server but it was just an experiment and we never really had a use for it. But feel free to take it over.

> - Host management - Add tags to test hosts

The same story: RH heavily depends on Beaker and Jenkins, and these do that well. No point in re-inventing the wheel. I meant to write a guide on how to integrate Avocado with Jenkins, but never had the time to put things together: https://trello.com/c/9zTcfa0v/904-add-avocado-based-solution-for-jenkins-solutions-for-python

> - Test scheduling - Allow test planning based on test tags and host tags
> These may sound familiar to you guys from the Autotest project. Would be great
> to know your thoughts for Avocado.
> 

Well, we only have one tagging mechanism, but it supports lists, etc. It should be possible to configure your scheduler (Jenkins in my example) to insert host tags into the filter-by-tags command line along with your custom job tags, which might give you what you're looking for. What I mean is:

1. Configure your hosts in Jenkins to export HOST_TAGS=... (for example "arch:x86_64 os:RHEL.8")
2. Configure the job to add parameter JOB_TAGS
3. In the Jenkins job use "avocado run ... --filter-by-tags $HOST_TAGS $JOB_TAGS ... -- $TESTS_REPOSITORY/"

What do you think? (note the categorized tags documentation is here: https://avocado-framework.readthedocs.io/en/latest/WritingTests.html#using-further-categorization-with-keys-and-values )
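
To make the tagging side concrete, a test carrying such key:value tags could look like this (hypothetical values matching the HOST_TAGS example above):

```
from avocado import Test


class PlatformTest(Test):

    def test(self):
        """
        Selected (or filtered out) by the scheduler via --filter-by-tags.

        :avocado: tags=arch:x86_64,os:RHEL.8
        """
        self.log.info("platform-specific checks would go here")
```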

Regards,
Lukáš

PS: The release/planning meeting is next Monday, feel free to join to discuss your questions/prioritization live (or let's continue on the ML, whatever suits you).

> Thanks,
> Fajun
> 
