[Avocado-devel] Avocado 101 questions

Fajun Chen fajun.chen at seagate.com
Wed Jan 30 07:52:59 UTC 2019


Hi Lukas,

Thank you very much for your support. I have some additional questions as
listed below.

On Mon, Jan 28, 2019 at 9:35 AM Lukáš Doktor <ldoktor at redhat.com> wrote:

> Dne 23. 01. 19 v 18:54 Fajun Chen napsal(a):
> > Hi,
> >
>
> Hello Fajun Chen,
>
> I'm sorry for the delay; I was at DevConf Brno this weekend.
>
> > I just got started with Avocado. I was able to build, list and run tests
> > without issues after reading the documents. But I still have some
> > questions which do not seem to be covered by the documents.
> >
> >
> >    1. How to define the test sequence in yaml/json? An example file with
> >    test references and test parameters would be very helpful.
>
> The simplest way is to enumerate the tests on the cmdline. We do support
> filters, so people can specify test methods, classes, files or directories,
> and the order is preserved (alphabetical for files/dirs, according to
> location for test methods/classes).
> https://avocado-framework.readthedocs.io/en/latest/Loaders.html
>
> For finer granularity we have the `yaml_loader`
> https://avocado-framework.readthedocs.io/en/latest/optional_plugins/yaml_loader.html
> which allows specifying tests in a yaml file. Basic usage is:
>
> ```
> !mux
> 1:
>     test_reference: passtest.py
> 2:
>     test_reference: failtest.py
> ```
>
> but the format allows defining loaders, loader parameters, changing some
> avocado arguments, and even directly modifying test params. I'm using it to
> run various external_runner tests together with avocado-vt tests with
> different params in a single job.
>
>
What's the syntax to set test params for a test (not a test file) in the
yaml loader file? For instance, how do I pass sleep_cycles and sleep_length
into the test in sleeptenmin.py?

```
test_reference: SleepTenMin.test
# how to set test params for the test?
```

Does the yaml loader support test discovery by tags?



> >    2. Could the tests in a test job be executed sequentially and in a
> >    predefined order?
>
> See (1), there is a defined order.
>
> >    3. Could we skip subsequent tests when a test fails?
>
> Unfortunately this is not available as a feature yet. There is a
> workaround: use `teststmpdir` to create a file, and in the `setUp` phase of
> the following tests check for its presence. We also have an RFC to
> implement a Job API that should allow dynamic job definition
> https://trello.com/c/hRmShGIp/671-rfc-job-api but there is no ETA yet.
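>
> A rough sketch of that workaround (untested; it relies on the
> `teststmpdir` and `cancel()` Test APIs, with a hypothetical marker-file
> name and a placeholder test body):
>
> ```
> import os
>
> from avocado import Test
>
> MARKER = "critical-test-failed"  # hypothetical marker-file name
>
>
> class CriticalTest(Test):
>
>     def run_critical_step(self):
>         pass  # placeholder for the real critical work
>
>     def test(self):
>         try:
>             self.run_critical_step()
>         except Exception:
>             # Leave a marker in the job-wide shared dir for later tests
>             open(os.path.join(self.teststmpdir, MARKER), "w").close()
>             raise
>
>
> class FollowUpTest(Test):
>
>     def setUp(self):
>         # Bail out early if an earlier test left the marker behind
>         if os.path.exists(os.path.join(self.teststmpdir, MARKER)):
>             self.cancel("critical test failed, skipping the rest")
>
>     def test(self):
>         pass
> ```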
>
> >    4. How to abort test job upon critical test failure?
>
> The same situation. We don't have framework support; one can use the same
> workaround to skip all remaining tests. Anyway, this feature should be
> fairly simple to develop and I see value in it. We could simply add a
> CRITICAL test status and abort the execution in such a case.
>
> Note if you mean to interrupt testing after any failed test, you can use
> https://avocado-framework.readthedocs.io/en/latest/GetStartedGuide.html#interrupting-the-job-on-first-failed-test-failfast


This failfast feature can probably meet our needs, assuming it supports
tests from either the command line or the yaml loader. Thanks.

>
>
> >    5. How to add custom code to handle exceptions from tests?
>
> Directly to the tests. Not sure what exactly you have in mind, but you can
> have your own utils library and use it in the tests.
>
> Note if you mean test failure vs. test error, we do have some decorators
> https://avocado-framework.readthedocs.io/en/latest/WritingTests.html#turning-errors-into-failures
> to declare certain exceptions as test failures, rather than anonymous test
> errors.
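>
> For instance (a minimal sketch using the `fail_on` decorator from those
> docs):
>
> ```
> from avocado import Test, fail_on
>
>
> class ParseTest(Test):
>
>     @fail_on(ValueError)
>     def test(self):
>         # A ValueError raised here is reported as FAIL instead of ERROR
>         int("not-a-number")
> ```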
>

We would like to catch exceptions from tests and take some actions, such as
collecting error logs. Should this custom code re-raise the exception so
Avocado can handle it? It would be nice to hide this from the tests. I'm
thinking about creating a subclass of avocado.Test and adding a custom
exception handler there.
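
Something along these lines, perhaps (a rough sketch; `collect_error_logs`
is a placeholder for our own log collection):

```
from avocado import Test


class BaseTest(Test):
    """Shared base class that collects logs when a test step raises."""

    def guarded(self, func, *args, **kwargs):
        # Tests call their risky steps through this wrapper
        try:
            return func(*args, **kwargs)
        except Exception:
            self.collect_error_logs()
            raise  # re-raise so Avocado still reports FAIL/ERROR

    def collect_error_logs(self):
        # Placeholder: gather device/system logs into the test output dir
        self.log.info("collecting error logs into %s", self.outputdir)
```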


>
> >    6. rsync is used to upload test logs to a remote server, and it is
> >    started after test completion. How to upload test logs incrementally
> >    for a long-running test before it completes?
>
> Avocado supports streamed results like TAP or the journal. We don't have a
> plugin to upload individual test results (or even to stream the files as
> the test runs), but we could give you pointers to develop such a plugin.
> Note if you only need to upload the log files after test execution, the
> plugin would be as simple as:
>
> ```
> def test_end(self, test):
>     self.publish(test.job.logdir)
> ```
>
> where `publish` would simply rsync the logdir to your server. Syncing the
> files as the test goes would probably be a bit harder and we'd have to
> think about it. One way is to add a logger, but that way you only get the
> logs and not the produced files. Another would be a plugin that creates a
> thread and keeps syncing the logdir. Again, it should not be that hard; we
> could give you pointers, but I'm not sure how significant it currently is
> for us.
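>
> A bare-bones sketch of such a plugin (untested; it assumes the
> `ResultEvents` plugin interface, whose method names may differ between
> versions, and a hypothetical rsync destination):
>
> ```
> import subprocess
>
> from avocado.core.plugin_interfaces import ResultEvents
>
>
> class RsyncUpload(ResultEvents):
>     """Rsync the job results directory every time a test finishes."""
>
>     name = "rsync_upload"
>     description = "upload job logs to a remote server as tests end"
>
>     # Hypothetical destination; a real plugin would read it from settings
>     DESTINATION = "user@logserver:/srv/avocado-results/"
>
>     def __init__(self, config=None):
>         self._logdir = None
>
>     def pre_tests(self, job):
>         self._logdir = job.logdir  # remember where the job logs live
>
>     def start_test(self, result, state):
>         pass
>
>     def test_progress(self, progress=False):
>         pass
>
>     def end_test(self, result, state):
>         # Incremental rsync keeps the repeated uploads cheap
>         subprocess.call(["rsync", "-a", self._logdir, self.DESTINATION])
>
>     def post_tests(self, job):
>         pass
> ```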
>

Thanks for the ideas. We could start with rsync at test end and add a
periodic-sync feature later for long-running tests.

>
> >    7. How to run multiple test jobs in parallel?
>
> Unfortunately the current runner allows only a single job per execution.
> Anyway, running multiple avocado instances at the same time is not a
> problem. This is another goal of the Job API RFC mentioned in answer (3).
>
> >    8. The test logs from low-level lib modules are captured in job.log
> >    instead of in test-specific directories. Just wondering if there are
> >    hooks in place to customize the logger so more test data is reported
> >    in the test-specific logs.
>
> I'm not sure I understand this properly. There is the `$results/job.log`
> file that should contain some job logs plus all logs from the tests. Then
> there are the per-test `$results/test-results/*/debug.log` logs that
> contain the part of `job.log` from when the test starts until it ends. They
> should contain no more and no less (with one exception, an interrupted
> test, where extra messages might get injected out of place). If that's not
> the case then we have a bug and would like to ask you for a reproducer.
>

Thanks for the clarification. I'll double-check whether the logs are
organized as designed and will let you know.


>
> Also note that the test is a python script, so an "evil" test might
> redefine its loggers. Anyway, even an "evil" test should not affect the
> main Avocado loggers, nor the other executed tests. We have a strict
> boundary between the runner and each test (they are actually different
> processes), as in other frameworks we had problems with tests affecting
> other tests.
>

I understand the rationale behind running each test in a separate process
now. This does pose a problem for manual/interactive testing, where we need
user input during a test. How can this be supported in Avocado?

>
> All in all, Avocado is a fairly established project now; we still have the
> Job API coming, plus persistent tests (allowing a reboot during test
> execution) and multi-stream (multi-host) testing in progress. The current
> LTS version is primarily python2-compatible, but a while ago master
> switched to python3 by default with backward compatibility, and we are
> about to release it as another LTS (LTS releases are maintained for about
> 1.5 years; there is an overlap with the previous LTS to allow a smooth
> transition, and we do have guidelines on what changed between versions and
> how to adjust). While evaluating, definitely check out `avocado.utils.*`,
> where we have several useful utils to speed up writing tests, and also the
> Aexpect sub-project
> https://github.com/avocado-framework/aexpect which is inspired by the
> Expect language and inherits from the pexpect project, but is improved (it
> was actually forked because of slow pexpect development) and could be
> useful even if you decide not to use Avocado itself.
>

Thanks again for the info. I will explore these utils. I'm impressed with
the modular architecture of Avocado. Kudos to the development team! I have
some items on my wish list:
- Web front end
- Host management - add tags to test hosts
- Test scheduling - allow test planning based on test tags and host tags
These may sound familiar to you guys from the Autotest project. Would be
great to know your thoughts for Avocado.

Thanks,
Fajun

