[Avocado-devel] Test assumptions question.

Lucas Meneghel Rodrigues lookkas at gmail.com
Tue Sep 13 15:21:51 UTC 2016


On Tue, Sep 13, 2016 at 10:15 AM Dmitry Monakhov <dmonlist at gmail.com> wrote:

> Hi,
> I'm trying to automate my kernel testing via avocado.
> I've faced some general design questions and want to ask for advice.
>
> The testing procedure is quite general and suits any project:
> 1) Download source
> 2) configure kernel
> 3) Build kernel
> 4) Run several semantic checkers
> 5-N) Run tests for that kernel (inside qemu, so reboot is not necessary)
>      in my case this is xfstests-bld (filesystem test)
>
> * Questions about test iterations
> It is obvious that it is not a good idea to place all the functionality
> in a single class. It is more reasonable to create the following
> hierarchy:
> linuxbuild.py: does 1, 2, 3
> sparse_check.py: 4
> xfstests-bld.py: 5
> ....
>
> But this means tests need to be aware of where to find common data.
> sparse_check.py needs the linux-kernel source.
> xfstests-bld.py needs to know where to find the bzImage.
> But each test has its own set of directories {workdir, srcdir, etc.},
> and there is no convenient path where tests can share data.
> The only candidate I found is get_tmp_dir from the core.data_dir module.
> In fact, avocado-misc-tests/memory/dma_memtest.py already uses that
> interface. Is this the correct way to do it?
>

Yes. To be honest, though, the use case of different tests executing in
sequence and sharing data was not something we considered at first. So
using get_tmp_dir will solve your problem, but it breaks the principle
that tests should be runnable on their own.
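
For reference, this is roughly the pattern dma_memtest.py uses (the class
and the 'linux-src' subdirectory name here are just examples):

import os

from avocado import Test
from avocado.core import data_dir


class SparseCheck(Test):

    def setUp(self):
        # data_dir.get_tmp_dir() is shared by every test in the job,
        # unlike self.workdir, which is private to each test.
        self.src = os.path.join(data_dir.get_tmp_dir(), 'linux-src')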

If I had a similar problem, I'd create a lib with the download, configure
and build procedures, and call them inside the tests (or setUp methods),
in such a way that, if you only run sparse_check.py, for example, the
code checks whether a source dir is already built, and if there isn't
one, it goes and fetches the code and builds it. This way your tests can
still cooperate while each remains runnable on its own. A rough sketch of
the idea follows below.
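
Something along these lines (kernellib and its function are placeholders,
not an existing Avocado API):

# kernellib.py - shared helper imported by all the kernel tests
import os

from avocado.core import data_dir


def get_kernel_source():
    """Return the shared kernel source dir, building it on first use."""
    src = os.path.join(data_dir.get_tmp_dir(), 'linux-src')
    if not os.path.isdir(src):
        os.makedirs(src)
        # steps 1-3 would go here: download, configure and
        # build the kernel into src
    return src

linuxbuild.py, sparse_check.py and xfstests-bld.py would each call
kernellib.get_kernel_source() from their setUp(), so any of them keeps
working when run on its own.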


> BTW: if this is the case, let's make some default install prefix for
> tests, where tests can install their binaries.
> Example (pseudocode):
> job_init()
> export AVOCADO_PKG_PREFIX="$(job.get_tmp_dir())/pkg-prefix"
> export PATH=$PATH:$AVOCADO_PKG_PREFIX/bin:$AVOCADO_PKG_PREFIX/sbin
>
> Test1: ./configure --prefix=$AVOCADO_PKG_PREFIX && make && make install
> Test2: now it can use the binaries installed by Test1
>

Sounds good to me. I'd choose AVOCADO_TEST_ARTIFACT_PREFIX, since you are
after artifacts produced by tests, but maybe that's too long.
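
A sketch of how tests could adopt that convention today, without any
framework support (the variable is the proposed one, nothing Avocado
sets itself):

import os

from avocado import Test
from avocado.core import data_dir


class InstallTool(Test):

    def setUp(self):
        # Derive the prefix deterministically from the job tmp dir,
        # so every test in the job computes the same path; Avocado
        # runs each test in its own process, so exported variables
        # would not survive between tests anyway.
        prefix = os.path.join(data_dir.get_tmp_dir(), 'pkg-prefix')
        os.environ['AVOCADO_TEST_ARTIFACT_PREFIX'] = prefix
        for subdir in ('bin', 'sbin'):
            os.environ['PATH'] += os.pathsep + os.path.join(prefix, subdir)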


> * Performance numbers:
> Some tests may produce performance values. Autotest has a cool feature,
> aka 'key/val', for that. AFAIU avocado is supposed to use the whiteboard
> for that, but AFAICS no one uses it at the moment. BTW, dbench, iozone
> and ebizzy save their data in datadir/perf.json instead.
> What is the best way to store perf values?
>

The whiteboard was supposed to be free-form, and keyval files are indeed
cool, but as time passes I'm becoming convinced that using JSON is
better, since it is ubiquitous and standardized as a data exchange format
at this point.

If you really need something other than JSON, the whiteboard could be
fine, but I'm hard-pressed to see why JSON wouldn't be an appropriate
format for performance data. Of course, we could have a wider discussion
on the subject if Lukas, Cleber, Amador and Ademar feel it's worthwhile.
A minimal sketch of the JSON-on-the-whiteboard idea is below.
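
For instance (the class name echoes the ebizzy example above, and the
keys and values are made up):

import json

from avocado import Test


class Ebizzy(Test):

    def test(self):
        # ... run the benchmark, then collect its results ...
        perf = {'records_per_sec': 12345.6, 'duration_s': 60}
        # self.whiteboard is free-form text persisted with the test
        # results; JSON-encoding it keeps the data machine-readable.
        self.whiteboard = json.dumps(perf)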

Cheers!