[Avocado-devel] Test assumptions question.

Lukáš Doktor ldoktor at redhat.com
Wed Sep 14 15:37:14 UTC 2016


On 14. 9. 2016 at 09:51, Dmitry Monakhov wrote:
> Lucas Meneghel Rodrigues <lookkas at gmail.com> writes:
>
>> On Tue, Sep 13, 2016 at 10:15 AM Dmitry Monakhov <dmonlist at gmail.com> wrote:
>>
>>> Hi,
>>> I'm trying to automate my kernel testing via avocado.
>>> I've run into some general design questions and want to ask for advice.
>>>
>>> The testing procedure is quite general and suits any project:
>>> 1) Download source
>>> 2) configure kernel
>>> 3) Build kernel
>>> 4) Run several semantic checkers
>>> 5-N) Run tests for that kernel (inside qemu, so reboot is not necessary)
>>>      in my case this is xfstests-bld (filesystem test)
>>>
>>> * Questions about test iterations
>>> It is obvious that it is not a good idea to place all functionality in a
>>> single class. Actually it is reasonable to create the following hierarchy:
>>> linuxbuild.py: Does: 1,2,3
>>> sparse_check.py: 4
>>> xfstests-bld.py: 5
>>> ....
>>>
>>> But this means the tests need to know where to find common data:
>>> sparse_check.py needs the linux kernel source,
>>> xfstests-bld.py needs to know where to find bzImage.
>>> But each test has its own set of directories {workdir, srcdir, etc.} and
>>> there is no convenient path where tests can share data.
>>> The only candidate I found is get_tmp_dir from the core.data_dir module.
>>> In fact avocado-misc-tests/memory/dma_memtest.py already uses that
>>> interface. Is this the correct way to do it?
>>>
>>
>> Yes. To be honest though, the use case where different tests are executed
>> in sequence sharing data was not something we considered at first. So
>> get_tmp_dir will solve your problem, but it breaks the principle that
>> tests should be runnable on their own.
>>
>> If I had a similar problem, I'd create a lib with the download, configure
>> and build procedures, and call them inside your tests (or setUp methods),
>> in such a way that, if you only run sparse_check.py, for example, the code
>> checks whether there is a source dir already built, and if there isn't, it
>> goes and fetches the code and builds it. This way your tests still
>> cooperate, while still being runnable separately.
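
A minimal sketch of that library pattern, assuming a helper module shared by
the tests (the module name, the bzImage path check and the clone/build
commands are just placeholders; only data_dir.get_tmp_dir() comes from the
discussion above):

    # kernel_lib.py -- hypothetical helper imported by linuxbuild.py,
    # sparse_check.py, xfstests-bld.py, ...
    import os
    import subprocess

    from avocado.core import data_dir

    def kernel_src(git_url):
        """Return a built kernel tree, fetching and building it only once."""
        src = os.path.join(data_dir.get_tmp_dir(), "linux-src")
        if os.path.exists(os.path.join(src, "arch/x86/boot/bzImage")):
            return src    # a previous test already built it
        subprocess.check_call(["git", "clone", "--depth=1", git_url, src])
        subprocess.check_call(["make", "-C", src, "defconfig"])
        subprocess.check_call(["make", "-C", src, "-j4"])
        return src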
> Hm. It appears that even tests from the same class cannot share
> state. For example, let's consider avocado's release test case.
> We have a git test which runs many micro test cases.
> An obvious way is to implement it as a single class with many test_X methods:
>
> class AvocadoRelease(Test):
>
>     def setUp(self):
>         self.log.info("do setUp: install requirements, fetch source")
Well, nothing prevents you from using something like this in setUp():

     artifacts = os.environ["AVOCADO_PKG_PREFIX"]
     if os.path.exists(artifacts):
         return   # a previous test already did the expensive setup
     os.mkdir(artifacts)
     self.log.info("do setUp: install requirements, fetch source")

As for the shared location, you could use a workdir (e.g. the jenkins 
workdir) and pass it via `--mux-inject` or, as suggested, via the OS environment.
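
For example, a test could resolve the shared location like this; a rough
sketch, where the "workdir" parameter name and the SHARED_WORKDIR variable
are just made-up examples:

    import os
    from avocado import Test

    class SparseCheck(Test):

        def setUp(self):
            # "workdir" would be injected via `--mux-inject` (or a yaml
            # variants file); fall back to an environment variable, and
            # finally to the test's own private workdir.
            default = os.environ.get("SHARED_WORKDIR", self.workdir)
            self.shared = self.params.get("workdir", default=default)

        def test(self):
            self.log.info("using shared location: %s", self.shared)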

>
>     def test_a(self):
>         self.log.info("do test_a: inspekt lint")
>
>     def test_b(self):
>         self.log.info("do test_b: inspekt style")
>
>     def tearDown(self):
>         self.log.info("do tearDown")
> My assumption was that the test sequence would be:
> do setUp
> do test_a: inspekt lint
> do test_b: inspekt style
> do tearDown
> But it turns out that each test case is wrapped with setUp()/tearDown():
> ####
> START 1-simpletest.py:AvocadoRelease.test_a
> do setUp: install requirements, fetch source
> do test_a: inspekt lint
> do tearDown
> PASS 1-simpletest.py:AvocadoRelease.test_a
> START 2-simpletest.py:AvocadoRelease.test_b
> do setUp: install requirements, fetch source
> do test_b: inspekt style
> do tearDown
> PASS 2-simpletest.py:AvocadoRelease.test_b
> ####
> This is not obvious, and it makes it hard to split a test into
> fine-grained test cases, because running setUp()/tearDown() for each one
> may be too intrusive. What is the convenient way to implement this scenario?

Speaking of setUp, note that tearDown is not called when setUp() fails.
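
So when setUp() allocates something expensive, it has to undo the work itself
on failure; a minimal sketch (fetch_sources() is a made-up helper):

    def setUp(self):
        # requires: import shutil, tempfile
        self.src = tempfile.mkdtemp()
        try:
            fetch_sources(self.src)    # hypothetical helper, may raise
        except Exception:
            # tearDown() will not run when setUp() raises, so remove the
            # partially created directory before re-raising.
            shutil.rmtree(self.src)
            raise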

>>> BTW: If this is the case, let's make some default install prefix for
>>> tests, where tests can install their binaries
>>> Example:
>>> job_init()
>>> export AVOCADO_PKG_PREFIX="job.get_tmp_dir() + '/pkg-prefix'"
>>> export  PATH=$PATH:$AVOCADO_PKG_PREFIX/bin/:$AVOCADO_PKG_PREFIX/sbin
>>>
>>> Test1: ./configure --prefix=$AVOCADO_PKG_PREFIX && make && make install
>>> Test2: Now it can use the binaries installed by Test1
>>>
>>
>> Sounds good to me. I'd choose AVOCADO_TEST_ARTIFACT_PREFIX, since you are
>> after artifacts produced by tests, but maybe that's too long.
> Ok, cool. I'll send a patch.
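
Until something like that exists, a test can already guard for it itself. A
rough sketch of the "Test2" side, treating AVOCADO_PKG_PREFIX purely as the
proposal above (whether skip() may be called from setUp depends on the
avocado version):

    import os
    from avocado import Test

    class UsePrebuiltBinaries(Test):
        """Hypothetical "Test2" consuming artifacts installed by "Test1"."""

        def setUp(self):
            # The variable is only proposed in this thread, so treat it as
            # optional and bail out when no previous test exported it.
            prefix = os.environ.get("AVOCADO_PKG_PREFIX", "")
            self.binary = os.path.join(prefix, "bin", "some-binary")
            if not os.access(self.binary, os.X_OK):
                self.skip("no artifacts installed by a previous test")

        def test(self):
            self.log.info("would run %s here", self.binary)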
>>
>>
>>> * Performance numbers:
>>> Some tests may produce performance values. Autotest has a cool feature
>>> for that, aka 'key/val'. AFAIU avocado is supposed to use the whiteboard
>>> for that, but AFAICS no one uses it at the moment. Btw, dbench, iozone and
>>> ebizzy save their data in datadir/perf.json instead.
>>> What is the best way to store perf values?
>>>
>>
>> Whiteboard was supposed to be free form, and indeed keyval files are cool,
>> but as time passes I'm becoming convinced that using JSON is better, since
>> it is ubiquitous and standardized as a data exchange format at this point.
>>
>> If you really need something different from JSON, the whiteboard could be
>> fine, but for performance data I'm hard pressed to see why JSON wouldn't
>> be an appropriate format. Of course, we could have a wider discussion on
>> the subject, if Lukas, Cleber, Amador and Ademar feel it's worthwhile.
>>
>> Cheers!

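For reference, storing the values as JSON in the test's output directory (the
way dbench/iozone already do) and mirroring them on the whiteboard could look
roughly like this; a sketch, with made-up metric names and assuming the Test
attributes outputdir and whiteboard:

    import json
    import os

    def record_perf(test, results):
        """Write perf values to perf.json and to the test whiteboard."""
        # results is a plain dict, e.g. {"throughput_mb_s": 812.4}
        path = os.path.join(test.outputdir, "perf.json")
        with open(path, "w") as output:
            json.dump(results, output, indent=4)
        test.whiteboard = json.dumps(results)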