[Avocado-devel] Avocado 101 questions

Fajun Chen fajun.chen at seagate.com
Wed Feb 6 06:22:15 UTC 2019


Hi Lukas,

As an experiment, I made the following changes to avocado/core/runner.py:
@@ -319,7 +319,7 @@
         # be able to read from the tty, and would hang. Let's replace
         # STDIN fd (0), with the same fd previously set by
         # `multiprocessing.Process()`
-        os.dup2(sys.stdin.fileno(), 0)
+        # os.dup2(sys.stdin.fileno(), 0)

         instance = loader.load_test(test_factory)
         if instance.runner_queue is None:
@@ -349,7 +349,10 @@
                         TEST_LOG.info('  %s: %s', source, location)
                 TEST_LOG.info('')
         try:
+            sys.stdin = open(0)
             instance.run_avocado()
+            sys.stdin.close()
+            sys.stdin = open(os.devnull)
         finally:
             try:
                 state = instance.get_state()

I can interact with the test manually with this change. I understand the risk
of keeping the stdin fd open, but will it be manageable as long as our tests
don't read from the tty unless it's warranted?

Thanks,
Fajun

On Tue, Feb 5, 2019 at 6:26 AM Lukáš Doktor <ldoktor at redhat.com> wrote:

> On 03. 02. 19 at 19:30, Fajun Chen wrote:
> > Hi Lukas,
> >
> > Sorry for the late reply. I was pulled into other work. Please see my
> > comments below.
> >
>
> No problem, we all have tasks to do...
>
> > On Wed, Jan 30, 2019 at 11:23 AM Lukáš Doktor <ldoktor at redhat.com>
> wrote:
> >
> >>>>>
> >>>>>    1. How to define the test sequence in yaml/json? An example file
> >>>>>    with test references and test parameters would be very helpful.
> >>>>
> >>>> The simplest is to enumerate the tests on the cmdline. We do support
> >>>> filters so people can specify test_methods, classes, files or
> >>>> directories, where the order is preserved (alphabetical for files/dirs
> >>>> and according to location for test_methods/classes).
> >>>> https://avocado-framework.readthedocs.io/en/latest/Loaders.html
> >>>>
> >>>> For finer granularity we have `yaml_loader`
> >>>> https://avocado-framework.readthedocs.io/en/latest/optional_plugins/yaml_loader.html
> >>>> that allows specifying tests in a yaml file. Basic usage is:
> >>>>
> >>>> ```
> >>>> !mux
> >>>> 1:
> >>>>     test_reference: passtest.py
> >>>> 2:
> >>>>     test_reference: failtest.py
> >>>> ```
> >>>>
> >>>> but the format allows you to define loaders, loader parameters, change
> >>>> some avocado arguments and even directly modify test params. I'm using
> >>>> it to run various external_runner tests together with avocado-vt tests
> >>>> using different params in a single job.
> >>>>
> >>>>
> >>> What's the syntax to set test params for a test (not a test file) in the
> >>> yaml loader file? For instance, how to pass sleep_cycles and sleep_length
> >>> into the test in sleeptenmin.py:
> >>>       test_reference: SleepTenMin.test
> >>>                  # how to set test params for the test?
> >>>
> >>
> >> The yaml_loader iterates through the resulting params and looks for
> >> "test_reference" and other keywords (see
> >> https://avocado-framework.readthedocs.io/en/latest/optional_plugins/yaml_loader.html
> >> for the complete list). In the end it adds all the resulting tests (as
> >> multiple ones can match) to the job, attaching the current params slice
> >> to each. What this means is that each test gets the "test_reference" and
> >> all other variables of the current slice defined in the yaml file:
> >>
> >> # loader.yaml
> >> !mux
> >> test1:
> >>     test_reference: passtest.py
> >>     foo: bar
> >> test2:
> >>     test_reference: passtest.py
> >>     foo: baz
> >>
> >> $ avocado --show all run loader.yaml --dry-run
> >> ...
> >> avocado.test: INIT 1-passtest.py:PassTest.test;-e630
> >> ...
> >> avocado.test: Test params:
> >> avocado.test: /run/test1:test_reference ==> passtest.py
> >> avocado.test: /run/test1:foo ==> bar
> >> ...
> >> avocado.test: INIT 2-passtest.py:PassTest.test;-e630
> >> ...
> >> avocado.test: Test params:
> >> avocado.test: /run/test2:test_reference ==> passtest.py
> >> avocado.test: /run/test2:foo ==> baz
> >>
> >> (Note: `--dry-run` is one of the neat features; it lets you see all
> >> globally-available params.)
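> >>
> >> For completeness, a minimal sketch of how a test can read such a param
> >> back via the params API (this is not the real examples/tests/passtest.py;
> >> the default value and logging are made up):
> >>
> >> ```
> >> from avocado import Test
> >>
> >>
> >> class PassTest(Test):
> >>     def test(self):
> >>         # "foo" is resolved from the current variant's params slice
> >>         # defined in loader.yaml
> >>         foo = self.params.get("foo", default="bar")
> >>         self.log.info("foo resolved to: %s", foo)
> >> ```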
> >
> > This works as suggested. Thanks for the tips
> >
> >>> Does the yaml loader support test discovery by tags?
> >>>
> >>
> >> It is possible to discover directories in the file-loader, but
> >> unfortunately it is not possible to override filter-by-tags per variant.
> >> What I mean is you can only specify one filter globally via
> >> `--filter-by-tags` on the command line and it will be applied to all
> >> yaml_loader references.
> >>
> >> This is because the filter-by-tags feature was implemented as a global
> >> filter on the resulting suite, but your request seems valid and we should
> >> probably re-evaluate and make it part of the loaders. It would require
> >> some slightly deeper changes to the loaders, as each of them would have to
> >> do the filtering during discovery (which shouldn't be that hard, as all
> >> plugins "should" inherit from the base TestLoader, so an implementation
> >> there should serve as the basis for all of them).
> >>
> >> Anyway let's discuss it on release/planning meeting
> >> https://trello.com/c/poL0jWIi/1487-rfc-move-filter-by-tag-to-loaders
> >> (feel free to join, but don't feel obligated, we'll discuss it anyway).
> >>>
> >>
> >
> > Test planning and test execution are done by different teams in our test
> > process. It would be nice to expand the yaml loader to include more
> > functionality so it can be a one-stop shop for test planning. For
> > instance, in addition to having filter-by-tags capability, could it
> > support other command-line options such as failfast in the yaml file? The
> > test sequencing capability provided by the yaml loader seems to be very
> > limited. For instance, how do we define the loop count of a test? How do
> > we define dependencies or conditional execution of tests in a sequence?
> >
>
> The way the loader currently works, it's not possible to turn on fail-fast
> per group of tests (if that's what you meant by the question). It's a
> global option per job. The possibility of a dynamic number of tests based
> on their execution is being considered for the Job API, but that is not yet
> available and relying on internal APIs would lead to breakages between
> versions.
>
> As for extra features, I always wanted to add a range of things (e.g. a
> list of `test_reference`s, or a list of values) but never had time to do
> so, so currently it's just a pure yaml file (with some extra tags). What I
> mean is that each variant has to have a unique name (the `test1:` and
> `test2:` in the previous example; using the same name results in merging of
> their content). In my testing I realized it's actually a good thing because
> I can categorize and filter the tests easily.
>
> Looping tests is the only place where I miss the possibility of using a
> counter. It could be added (it would actually be a matter of ~3 lines of
> code) to yaml2mux using a special keyword (something like
> `test_reference_iterations`). Anyway, even now it's possible to define
> iterations the same way as tests. Let's use the previous example:
>
> ```
> # loader.yaml
> tests: !mux
>     test1:
>         test_reference: passtest.py
>         foo: bar
>     test2:
>         test_reference: passtest.py
>         foo: baz
>         iterations: !mux
>             1:
>             2:
> iterations: !mux
>     1:
>     2:
>     3:
>     4:
>     5:
> ```
>
> This should generate 5 iterations of test1 and 2x5 iterations of test2.
> It's not super-nice, but it does the job. In my CI I'm generating these
> from bash so I don't really mind the verbosity, but
> `test_reference_iterations` would simplify things there.
>
>
> Note you can put your tests deeper into the structure and create multiple
> leaves:
>
>
> ```
> tests: !mux
>     sanity: !mux
>         foo:
>         bar:
>         baz:
>     integration: !mux
> ...
> ```
>
> and you can define some defaults outside of this structure. Basically
> yaml_loader simply uses the yaml_to_mux plugin's capability to iterate
> through the params, and the only difference is that it attempts to get
> certain keys from the resulting params, discovers the test(s) and
> associates the params slice with those tests. You can use
> `--mux-suite-only|--mux-suite-out` to get only the variants you're
> interested in, so you can have one big file and filter it before execution
> (e.g. to execute only the `sanity` tests). Note, the filters might seem odd
> but there are reasons why they behave the way they do, so do read the
> documentation and try them out before saying they are broken.
>
> >>
> >>>>>    2. Could the tests in a test job be executed sequentially and in
> >>>>>    predefined order?
> >>>>
> >>>> See (1), there is a defined order.
> >>>>
> >>>>>    3. Could we skip subsequent tests when a test fails?
> >>>>
> >>>> Unfortunately this is not available as a feature yet. There is a
> >>>> workaround: use `teststmpdir` to create a file and, in the `setUp`
> >>>> phase of the following test, check for its presence (see the sketch
> >>>> below). We also have an RFC to implement the Job API, which should
> >>>> allow dynamic job definition
> >>>> https://trello.com/c/hRmShGIp/671-rfc-job-api but there is no ETA yet.
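> >>>>
> >>>> A minimal sketch of that workaround (class names and the marker file
> >>>> are made up; it assumes `self.teststmpdir`, the directory shared by the
> >>>> tests of a job, and `self.cancel()` behave as documented):
> >>>>
> >>>> ```
> >>>> import os
> >>>>
> >>>> from avocado import Test
> >>>>
> >>>>
> >>>> class CriticalTest(Test):
> >>>>     def test(self):
> >>>>         try:
> >>>>             # the real critical checks would go here
> >>>>             self.assertEqual(1 + 1, 2)
> >>>>         except Exception:
> >>>>             # leave a marker so the following tests can skip themselves
> >>>>             marker = os.path.join(self.teststmpdir, "critical_failed")
> >>>>             open(marker, "w").close()
> >>>>             raise
> >>>>
> >>>>
> >>>> class FollowupTest(Test):
> >>>>     def setUp(self):
> >>>>         marker = os.path.join(self.teststmpdir, "critical_failed")
> >>>>         if os.path.exists(marker):
> >>>>             self.cancel("A previous critical test failed, skipping")
> >>>>
> >>>>     def test(self):
> >>>>         pass
> >>>> ```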
> >>>>
> >>>>>    4. How to abort test job upon critical test failure?
> >>>>
> >>>> The same situation. We don't have framework support; one can use the
> >>>> same workaround to skip all remaining tests. Anyway this feature should
> >>>> be fairly simple to develop and I see value in it. We could simply add
> >>>> a CRITICAL test status and in such a case abort the execution.
> >>>>
> >>>> Note if you mean to interrupt testing after any failed test, you can use
> >>>> https://avocado-framework.readthedocs.io/en/latest/GetStartedGuide.html#interrupting-the-job-on-first-failed-test-failfast
> >>>
> >>>
> >>> This fail-fast feature can probably meet our needs, assuming it supports
> >>> the tests from either the command line or the yaml loader. Thanks.
> >>>
> >>
> >> The source of the tests does not matter, that's a different level. The
> >> fail-fast feature is very simple and basically just interrupts the job
> >> whenever there is a failure. It doesn't check whether it's the first,
> >> last or the only test in the queue (it works the same way as
> >> python-unittest's --failfast).
> >>
> >
> > This matches what I thought. Thanks for the confirmation.
> >
> >
> >>>>
> >>>>
> >>>>>    5. How to add custom code to handle exceptions from tests?
> >>>>
> >>>> Directly in the tests. Not sure what exactly you have in mind, but you
> >>>> can have your own utils library and use it in the tests.
> >>>>
> >>>> Note if you mean test failure vs. test error, we do have some decorators
> >>>> https://avocado-framework.readthedocs.io/en/latest/WritingTests.html#turning-errors-into-failures
> >>>> to declare certain exceptions as test failures, rather than anonymous
> >>>> test errors.
> >>>>
> >>>
> >>> We would like to catch the exceptions from tests and take some actions
> >>> such as collecting error logs. Should this custom code raise the
> >>> exception again so Avocado can handle it? It would be nice to hide this
> >>> from the tests. I'm thinking about creating a subclass of avocado.Test
> >>> and adding a custom exception handler there.
> >>>
> >>
> >> This is definitely the encouraged way, especially when you have many
> >> custom things to do in your testing. We even have documentation on how to
> >> do that:
> >> https://avocado-framework.readthedocs.io/en/latest/SubclassingAvocado.html
> >>
> >> and our sub-project Avocado-vt might serve as an example of a very
> >> complex test (including a custom loader, which might not be necessary in
> >> your case):
> >> https://github.com/avocado-framework/avocado-vt
> >>
> >> Another example of a simpler subclass is the fedora-modularity one:
> >> https://github.com/fedora-modularity/meta-test-family/blob/master/moduleframework/avocado_testers/avocado_test.py
> >> but disclaimer: "We have nothing to do with that project"
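> >>
> >> A rough sketch of what such a subclass could look like (the class and
> >> helper names are made up, and the dmesg collection is just an example
> >> action):
> >>
> >> ```
> >> from avocado import Test
> >> from avocado.utils import process
> >>
> >>
> >> class BaseTest(Test):
> >>     """Shared base class that collects logs whenever a wrapped step fails."""
> >>
> >>     def collect_error_logs(self):
> >>         # example action: dump dmesg into this test's output directory
> >>         process.run("dmesg > %s/dmesg.txt" % self.outputdir,
> >>                     shell=True, ignore_status=True)
> >>
> >>     def run_checked(self, func, *args, **kwargs):
> >>         """Run a callable; on any exception collect logs, then re-raise."""
> >>         try:
> >>             return func(*args, **kwargs)
> >>         except Exception:
> >>             self.collect_error_logs()
> >>             raise
> >> ```
> >>
> >> Individual tests would then call `self.run_checked(...)` around the risky
> >> steps and Avocado would still see the original exception.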
> >>
> >>
> > Thanks for the references. I'll spend some time on a proof-of-concept
> > implementation.
> >
> >>
> >>>>
> >>>>>    6. rsync is used to upload test logs to a remote server, which is
> >>>>>    started after test completion. How to upload test logs
> >>>>>    incrementally for a long-running test before it completes?
> >>>>
> >>>> Avocado supports streamed results like tap or journal. We don't have a
> >>>> plugin to upload individual test results (or even to stream the files
> >>>> as the test runs), but we could give you pointers to develop such a
> >>>> plugin. Note if you only need to upload the files after test execution,
> >>>> the plugin would be as simple as:
> >>>>
> >>>>     def test_end(self, test):
> >>>>         self.publish(test.job.logdir)
> >>>>
> >>>> where publish would simply rsync the logdir to your server. Syncing
> >>>> files as the test goes would probably be a bit harder and we'd have to
> >>>> think about it. One way is to add a logger, but that way you only get
> >>>> logs and not the produced files. Another would be to have a plugin that
> >>>> creates a thread and keeps syncing the logdir. Again, it should not be
> >>>> that hard; we could give you pointers, but we're not sure how
> >>>> significant it currently is for us.
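> >>>>
> >>>> A minimal sketch of that background-sync idea in plain Python (the
> >>>> rsync destination, interval and function name are made up; this is not
> >>>> an existing Avocado plugin):
> >>>>
> >>>> ```
> >>>> import subprocess
> >>>> import threading
> >>>>
> >>>>
> >>>> def start_log_sync(logdir, remote, interval=60):
> >>>>     """Keep rsyncing `logdir` to `remote` until the returned event is set."""
> >>>>     stop = threading.Event()
> >>>>
> >>>>     def _loop():
> >>>>         while not stop.is_set():
> >>>>             subprocess.call(["rsync", "-a", "--partial", logdir, remote])
> >>>>             stop.wait(interval)
> >>>>         # one final sync after testing finishes
> >>>>         subprocess.call(["rsync", "-a", logdir, remote])
> >>>>
> >>>>     threading.Thread(target=_loop, daemon=True).start()
> >>>>     return stop
> >>>>
> >>>>
> >>>> # usage (paths are made up):
> >>>> # stop = start_log_sync("/home/user/avocado/job-results/latest/",
> >>>> #                       "user@logserver:/srv/avocado-results/")
> >>>> # ... run tests ...
> >>>> # stop.set()
> >>>> ```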
> >>>>
> >>>
> >>> Thanks for the ideas. We could start with rsync at test end and add a
> >>> feature for periodic sync later for long-running tests.
> >>>
> >>
> >> Sure, btw if the job log is enough for you, you can use `avocado --show
> >> test` to show the test log in the console (or `--show all`).
> >>
> >> And speaking of loggers, you can define a certain logger in your tests
> >> and use it to mark important steps. We used to use something like that
> >> previously in Avocado-vt, and with that log enabled in the output
> >> (`avocado --show app,context`) it would show something like:
> >>
> >> JOB ID     : d5eb807b581736ffa923c4305287af82a014718d
> >> JOB LOG    : /home/medic/avocado/job-results/job-2019-01-30T19.02-d5eb807/job.log
> >>  (1/1) io-github-autotest-qemu.boot: |
> >> Starting VM/
> >> SSH to VM-
> >> Shutting down VM\
> >> PASS (19.26 s)
> >> RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
> >> JOB TIME   : 20.17 s
> >> JOB HTML   : /home/medic/avocado/job-results/job-2019-01-30T19.02-d5eb807/results.html
> >>
> >> Documentation about this feature is here:
> >> https://avocado-framework.readthedocs.io/en/latest/WritingTests.html#advanced-logging-capabilities
> >
> >
> > This is very similar to the Python logger configuration which we used for
> > internal testing. Good to know that Avocado has something similar or
> > better.
> >
> >
> >>
> >>
> >>>>
> >>>>>    7. How to run multiple test jobs in parallel?
> >>>>
> >>>> Unfortunately the current runner allows only a single job per
> >>>> execution. Anyway, running multiple avocado instances at the same time
> >>>> is not a problem. This is another goal of the Job API RFC mentioned in
> >>>> answer (3).
> >>>>
> >>>>>    8. The test logs from low-level lib modules are captured in job.log
> >>>>>    instead of in test-specific directories. Just wondering if there
> >>>>>    are hooks in place to customize the logger so more test data is
> >>>>>    reported in test-specific logs.
> >>>>
> >>>> I'm not sure I understand this properly. There is the `$results/job.log`
> >>>> file that should contain some job logs + all logs from tests. Then
> >>>> there are the per-test `$results/test-results/*/debug.log` logs that
> >>>> contain the part of `job.log` from when the test started until the test
> >>>> ends. It should not contain more, nor less (with one exception, and
> >>>> that is an interrupted test, where extra messages might get injected
> >>>> out-of-place). If that's not the case then we have a bug and would like
> >>>> to ask you for a reproducer.
> >>>>
> >>>
> >>> Thanks for the clarification. I'll double check to see if the logs are
> >>> organized per design. Will let you know.
> >>>
> >>
> >
> > Use test_env.py in the Avocado examples/tests directory as an example. The
> > logs from the log_std_io() function are in both job.log and the per-test
> > debug.log. However, if the function is moved to a separate module like this
> >> cat log_io.py
> >
> > import logging
> > import os
> >
> > log = logging.getLogger(__name__)
> >
> > def log_std_io(name, std_io):
> >     log.debug('%s:', name.upper())
> >     log.debug(' sys.%s: %s', name, std_io)
> >     log.debug(' sys.%s is a tty: %s', name, std_io.isatty())
> >     if hasattr(std_io, 'fileno'):
> >         log.debug(' fd: %s', std_io.fileno())
> >         log.debug(' fd is tty: %s', os.isatty(std_io.fileno()))
> >     else:
> >         log.debug(' fd: not available')
> >         log.debug(' fd is a tty: can not determine, most possibly *not* a tty')
> >
> > and test_env.py calls it as an external function, the logs from this
> > function are in job.log, but NOT in the per-test debug.log. Is this per
> > design?
> >
>
> Well, this is a bug/gray area, remotely related to
> https://trello.com/c/j53LQn01/1064-bug-writing-to-stdout-fd-1-or-stderr-fd-2-should-not-affect-the-runner
> Let's create another card to double-check we cover that as well:
> https://trello.com/c/AdTNgchM/1488-bug-things-logged-outside-avocadotest-are-only-saved-in-joblog-and-not-in-tests-debuglog
>
> Also note that the test's `sys.stdout` is defined in `avocado.core.output`
> and is a LoggingFile instance that is redirected to `avocado.test.stdout`
> logger.
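>
> One possible workaround (untested here, and it assumes the per-test
> debug.log is fed from the "avocado.test" logger hierarchy) is to hang the
> helper module's logger under that namespace instead of the plain module
> name:
>
> ```
> # log_io.py -- same helper, but logging under the "avocado.test" hierarchy
> import logging
> import os
>
> log = logging.getLogger("avocado.test." + __name__)
>
>
> def log_std_io(name, std_io):
>     log.debug('%s:', name.upper())
>     log.debug(' sys.%s: %s', name, std_io)
>     log.debug(' sys.%s is a tty: %s', name, std_io.isatty())
>     if hasattr(std_io, 'fileno'):
>         log.debug(' fd: %s', std_io.fileno())
>         log.debug(' fd is tty: %s', os.isatty(std_io.fileno()))
>     else:
>         log.debug(' fd: not available')
>         log.debug(' fd is a tty: can not determine, most possibly *not* a tty')
> ```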
>
> >>
> >>>>
> >>>> Also note that the test is a Python script, so an "evil" test might
> >>>> re-define its loggers. Anyway even an "evil" test should not affect the
> >>>> main Avocado loggers, nor the other executed tests. We do have a strict
> >>>> boundary between the runner and each test (they are actually different
> >>>> processes) as in other frameworks we had problems with tests affecting
> >>>> other tests.
> >>>>
> >>>
> >>> I understand the rationale behind running each test in a separate
> >>> process now. This does pose a problem for manual/interactive testing
> >>> where we need user input during a test. How to support this in Avocado?
> >>>
> >>
> >> Well, we try to prevent that as much as we can. STDIN is always closed
> >> in Avocado tests, as we believe tests should be fully automated, and we
> >> have aexpect https://github.com/avocado-framework/aexpect to interact
> >> with programs under test.
> >>
> >> Still, for debugging purposes it is possible to interact with the program
> >> via sockets. I'm using `pydev` to debug/single-step test execution and we
> >> also have support for gdb that interrupts the test and provides a shell
> >> script to connect to the process when it reaches a breakpoint. Similarly
> >> you can write your own plugins to allow manual testing, but I'd rather
> >> you reconsidered that option and used random timeouts and aexpect....
> >>
> >>
> > This is one of the main roadblocks on our list. We can't eliminate human
> > interaction in our test process. Could we pipe the test subprocess's stdin
> > and stdout through the test runner? We would need an out-of-band
> > communication path if not.
> >
>
> We're open to suggestions, but from the beginning we wanted to avoid
> human interaction in normal execution, while allowing a great level of
> freedom for developers, which is why the test supports a "stopped" status
> where human interaction can take place. Perhaps we could add support
> for human interaction similarly to:
> https://avocado-framework.readthedocs.io/en/67.0/DebuggingWithGDB.html
>
> At this point I can only suggest using sockets, replacing the test's
> `sys.stdin` with one and notifying the user about the port using:
>
> ```
> from avocado.core.output import LOG_UI
>
> LOG_UI.info("Test interrupted, please use port %s to connect to stdin", port)
> ```
>
> That would be logged in the `avocado.app` logger, therefore in the UI. You
> might want to enable the stdout/stderr streams (or replace them with the
> socket as well) in order to interact with the test.
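>
> A minimal sketch of that socket idea (the helper name and the port handling
> are made up; this is not an existing Avocado API):
>
> ```
> import socket
> import sys
>
> from avocado.core.output import LOG_UI
>
>
> def wait_for_interactive_stdin(port=0):
>     """Block until someone connects, then use that connection as stdin."""
>     server = socket.socket()
>     server.bind(("", port))          # port=0 lets the OS pick a free port
>     server.listen(1)
>     LOG_UI.info("Test paused, connect to port %s for stdin (e.g. `nc <host> %s`)",
>                 server.getsockname()[1], server.getsockname()[1])
>     conn, _ = server.accept()
>     sys.stdin = conn.makefile("r")   # the test can now call input()/readline()
>     return conn
> ```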
>
> But feel free to send an RFC about semi-interactive testing; we might give
> you some pointers and everyone would benefit.
>
> >
> >>>>
> >>>> All in all, Avocado is a fairly established project now. We still have
> >>>> the JobAPI coming, persistent tests (allowing reboot during test
> >>>> execution) and multi-stream (multi-host) testing in progress. The
> >>>> current LTS version is primarily python2-compatible, but for a longish
> >>>> period master has defaulted to python3 with backward compatibility and
> >>>> we are about to release it as another LTS (LTS releases are maintained
> >>>> for about 1.5 years, there is an overlap with the previous LTS to allow
> >>>> a smooth transition, and we do have guidelines on what changed between
> >>>> versions and how to adjust). While evaluating, definitely check out
> >>>> `avocado.utils.*` where we have several useful utils to speed up
> >>>> writing tests, and also the Aexpect sub-project
> >>>> https://github.com/avocado-framework/aexpect which is inspired by the
> >>>> Expect language and derived from the pexpect project, but improved (it
> >>>> was actually forked because of slow pexpect development) and could be
> >>>> useful even if you decide not to use Avocado itself.
> >>>>
> >>>
> >>> Thanks again for the info. I will explore these utils. I'm impressed
> >>> with the modular architecture of Avocado. Kudos to the development team!
> >>> I have some items on my wish list:
> >>
> >> Thank you, there are still some burdens to be refactored but overall I do
> >> like Avocado.
> >>
> >>> - Web front end
> >>
> >> There is https://github.com/avocado-framework/avocado-server but it was
> >> just an experiment and we never really had a use for it. But feel free to
> >> take it over.
> >>
> >
> > I looked at this a while ago. We could use it as a starting point if we
> > go this route. Have you guys considered porting the autotest web UI over?
> >
> >
>
> We have, that is why avocado-server was created, but we moved towards
> Beaker/Jenkins integration instead, as these services have years of
> development behind them and are built for these tasks.
>
> >>
> >>> - Host management - Add tags to test hosts
> >>
> >> The same story: RH heavily depends on Beaker and Jenkins and these do
> >> that well. No point in re-inventing the wheel. I meant to write a guide
> >> on how to integrate Avocado with Jenkins but never had time to put things
> >> together:
> >> https://trello.com/c/9zTcfa0v/904-add-avocado-based-solution-for-jenkins-solutions-for-python
> >>
> >>> - Test scheduling - Allow test planning based on test tags and host tags
> >>> These may sound familiar to you guys from the Autotest project. Would be
> >>> great to know your thoughts for Avocado.
> >>>
> >>
> >> Well, we only have one tagging mechanism, but it allows list support,
> >> etc. It should be possible to configure your scheduler (Jenkins in my
> >> example) to insert host tags into the filter-by-tags cmdline along with
> >> your custom filter-by-tags, which might result in what you're looking
> >> for. What I mean is:
> >>
> >> 1. Configure your hosts in Jenkins to export HOST_TAGS=... (for example
> >> "arch:x86_64 os:RHEL.8")
> >> 2. Configure the job to add parameter JOB_TAGS
> >> 3. In the Jenkins job use "avocado run ... --filter-by-tags $HOST_TAGS
> >> $JOB_TAGS ... -- $TESTS_REPOSITORY/"
> >>
> >> What do you think? (Note the categorized tags documentation is here:
> >> https://avocado-framework.readthedocs.io/en/latest/WritingTests.html#using-further-categorization-with-keys-and-values)
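> >>
> >> For reference, this is roughly what a test carrying such categorized tags
> >> looks like (the class name is made up; the tag values are just the ones
> >> from the example above):
> >>
> >> ```
> >> from avocado import Test
> >>
> >>
> >> class HostSpecificTest(Test):
> >>     """
> >>     :avocado: tags=arch:x86_64,os:RHEL.8,sanity
> >>     """
> >>
> >>     def test(self):
> >>         pass
> >> ```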
> >>
> >>
> > Thanks for the ideas. Jenkins integration is on my todo list. I will work
> > on it and provide updates later.
> >
>
> IMO Jenkins is evil, but it does the job. And there are many plugins to do
> the system provisioning etc., so I can only recommend it.
>
> Regards,
> Lukáš
>
> >
> >>
> >> PS: The release/planning meeting is next Monday, feel free to join to
> >> discuss your questions/prioritization live (or let's continue on the ML,
> >> whatever suits you)
> >>
> > I would like to join if my schedule allows. When is the meeting and how do
> > I join?
> >
> > Thanks,
> > Fajun
> >
>
>
>

