[Avocado-devel] Fwd: Avocado future

Cleber Rosa crosa at redhat.com
Tue Dec 11 16:55:45 UTC 2018



On 12/11/18 10:23 AM, Lukáš Doktor wrote:
> Dne 04. 12. 18 v 16:40 Laurent Leuvrey napsal(a):
>> Hello,
>>
> 
> Hello Laurent,
> 
> I'm sorry for such a late response; I wanted to double-check with the core team before answering. A first heads-up: internally we meet on Mondays, and every release there is a public meeting where you can get an immediate response (and have it recorded, see https://www.youtube.com/channel/UC-RVZ_HFTbEztDM7wNY4NfA). Another option is the avocado+qemu meeting every Tuesday (https://www.redhat.com/archives/avocado-devel/2018-November/msg00018.html), which is dedicated to avocado_qemu work, but I believe people will talk to you about regular Avocado issues there as well.
> 

Yes, all points made by Lukáš are very valid ways of getting our attention.

Sometimes, as you may have experienced, I can fall a bit behind on the
mailing list... sorry about that.

>>     Trying to gather information on what Avocado will be in the future, I initially sent the mail below to the developers mailing list, as it was the only way of contacting the Avocado team that I found in the documentation. The email was then returned, which is why I am taking the liberty of reaching out to you directly.
>>

I think the reason is that we were getting a lot of SPAM, so we had to
restrict the list to subscribers only.  I'll check if that was the
reason, and then evaluate whether it's productive to make it open again.

>>     Best regards,
>>
>>             Laurent
>>
>> -------- Forwarded Message --------
>> Subject:        Avocado future
>> Date:   Tue, 4 Dec 2018 16:15:12 +0100
>> From:   Laurent Leuvrey <laurent.leuvrey at arm.com>
>> To:     avocado-devel at redhat.com
>>
> 
> This is odd; I don't know why it rejected you. Maybe try subscribing there first.
> 

Like I said before, I *think* it's a subscription issue indeed.  I'll check.

>>
>> Hello,
>>
>>     I'm looking for a testing framework suite, and Avocado is very appealing given its functional testing support, its active development and its quickly evolving features. I went through the documentation, forum and videos to learn what Avocado's future will be and whether its testing abilities will grow, and I still have some questions regarding Avocado's evolution:
> 
> We definitely plan on improving Avocado; RFEs are warmly welcome, as are contributions. Lately we have focused more on avocado+qemu, but we have other teams at RH using it for userspace/kernel testing, and there are other companies using it for various kinds of testing as well.
> 
>>
>>     - is there a plan to support unit-testing features through plugins for xUnit-style tools (JUnit, CppUnit...)?
>>
> 
> Integrating with different unit-testing tools should be fairly simple; we have glib/golang/python-unittest/robot loader plugins, so adding new ones should be doable.
> 
>>     - will Avocado be able to use distributed computing (LSF, cloud...), for instance for parallel test job submission?
>>
> 
> I'm not aware of any plans to do distributed parallel testing, as we usually use Avocado as the "man in the middle" and Jenkins as the tool that decides where and how to run. Anyway, you can split jobs and gather the results as a collection of xunit/tap files, but sure, a way to balance this would be nice. Again, an RFE is a nice way to give us feedback. (I know there is already an EC2 plugin, and somewhere I saw an Azure plugin as well, but I don't have real experience with these.)
> 

For parallel jobs, Lukáš is absolutely right.  Now for *tests*, things
are a bit more complicated.

One *big* problem with Avocado right now is how tightly the test process
is coupled to the runner.  And for the remote "test" runners, we
basically fake it: we have remote *job* runners.  My idea, and I've
written some notes and code along those lines, is to properly define the
interface between the test runner and the entity that collects the
results (today, both of those tasks are done by the "avocado" command
line utility).

For instance, a utility to run a single test, let's call it
"avocado_test_run", would be instructed where to report results: to its
STDOUT, to a UNIX socket, to an HTTP server, etc.  Something like:

[Avocado Job Runner, aka "avocado"]
              |
              +---- avocado_test_run --report-to-stdout path/to/test.py
              +---- avocado_test_run --report-to-stdout path/to/other.py


If "path/to/test.py" crashes, "avocado_test_run" would translate that to
"{'result': 'ERROR', 'failure_reason': '<TRACEBACK HERE>'}" and write
that into its STDIN.

This alone would let us run *tests*, not just *jobs*, not only remotely
(for instance, "ssh remote-machine avocado_test_run --report-to-stdout
path/to/test.py" would behave just the same) but also in parallel.


>>     - currently, there are some wrappers for tools such as Valgrind, which allow instrumenting a test run. Will it be possible, through Avocado command line options, to ask Avocado to directly instrument the tested command itself? For instance, if you want to test the "sleep" command, you write a sleepTest.py "around" this command. For now, Valgrind will estimate the memory usage of sleepTest.py but not of the system sleep itself.
>>


IIUC, you want to select which commands get executed with which
wrappers, but instead of doing so "transparently" (to the test) in the
command line, you'd do that explicitly in the test.  Is that right?

If so, I believe it'd be possible to use the
"avocado.utils.process.WrapSubProcess" class... but AFAICT, we haven't
tried that yet.

> 
> I like these as well, as providing detailed feedback on failed tests is as simple as re-running them with certain tools enabled. I'm not sure exactly what you want, but I think wrappers work the way you describe. When you write a test, whenever you use the `avocado.utils.process` library to execute anything, that is the point where you can hook in your wrapper/gdb. The simplest demonstration would be:
> 
>     avocado --show all run --external-runner /bin/sleep --wrapper "/bin/time:*sleep" -- 1
> 
> This won't run the whole "ExternalRunnerTest" via `/bin/time`; it will execute the ExternalRunnerTest, which uses `process.run` to execute the desired binary. The `--wrapper $script:binary` option makes sure that whenever `process.run` attempts to run a matching binary ("*sleep" in this example), it uses the wrapper.
> 
> Now, when you create a complex test with many `process.run` invocations, each of them will check the binary and, on a match, use that wrapper/gdb. By the way, gdb is even cooler than wrappers, as it allows normal execution and reports stdout/stderr and everything, and only on a breakpoint do you get a nice UI message explaining how to connect to the gdb server. Very useful for things that require many iterations to reproduce.
> 

Right, these are good examples.
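
In case it helps, my understanding is that a wrapper is just a small
executable that receives the original command line as its arguments and
is expected to exec it under the desired tool.  A minimal, purely
illustrative one (in Python rather than shell) could be:

#!/usr/bin/env python3
# Illustrative wrapper: run the wrapped command under /usr/bin/time.
# Avocado passes the original command line as this script's arguments.
import os
import sys

os.execvp("/usr/bin/time", ["/usr/bin/time"] + sys.argv[1:])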

I think the main message I'd leave here is: we're really open to
adjusting the direction of features, or of the core of the Avocado
project itself, based on real-world requirements.  While we have spent
some time developing features of hypothetical use, we're now a lot more
focused on delivering features that solve real problems we're facing
ourselves (see the QEMU example).

It'd be nice to have more people joining, and sharing the steering wheel.

Regards!
- Cleber.

> Best regards,
> Lukáš
> 
>>     Best regards,
>>
>>                 Laurent
>>
>>
> 
> 

-- 
Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]



