From mhradile at redhat.com  Wed Oct 26 08:41:51 2016
From: mhradile at redhat.com (Miroslav Hradilek)
Date: Wed, 26 Oct 2016 10:41:51 +0200
Subject: [sos-devel] Proposal: Synergy of upstream and downstream testing of sos
Message-ID:

Hello,

I have been thinking about this for a long time and I believe the
following could work.

The problem in downstream testing:
For downstream, we develop tests for a) specific issues, b) whole
plugins, and c) basic functionality. Unlike the sos internal test
suite, these are blackbox integration tests written in bash, as opposed
to Python whitebox tests that exercise the functions themselves.
Sometimes the tests are run in a real environment, but very often the
files and commands to be collected are mocked. To make our lives easier
there are libraries of setup functions, mock functions, assertions and
so on.

To ensure these tests can be reused, a lot of branching needs to be
done within the code, depending on the environment but mostly on the
version of sosreport and the patches applied downstream. Code branching
is very inefficient and takes a lot of effort. While writing the tests,
the library is developed alongside them and needs to be maintained too.
Upstream gets only bug reports from these efforts.

Problems I suggest we solve:
1. Avoid downstream branching and duplicate test library development.
2. Extend upstream test coverage and the test library with
   contributions from downstream testers.
3. Write test library functions and tests so that they can run in a
   mocked environment as well as in a real environment with the flick
   of a switch.

Extra work I suggest we take on:
1. Developers and plugin contributors would be required to update the
   tests and libraries so that they keep working with their commits.
2. Downstream testers would extend the upstream test suite and
   libraries and later reuse them downstream.

My proposal:
* Let's choose a test framework and assertion library, and develop our
  library and fixtures upstream. Develop at least part of the
  integration test suite [ b) and c) ] upstream and use the upstream
  library for a) downstream.
* Write tests and mock functions so that they use real file chunks;
  by changing the environment, the mock setups can be disabled so that
  real files and commands are collected and asserted.
* Require the integration test suite to pass in order to accept a
  commit. Encourage submitting integration tests with new plugins and
  new plugin functionality.

What do you think?

-- 
Miroslav Hradilek
Quality Assurance Engineer
Base OS Quality Engineering
Red Hat Czech, s. r. o.
Purkynova 99
612 45 Brno, Czech Republic


From bmr at redhat.com  Wed Oct 26 11:12:11 2016
From: bmr at redhat.com (Bryn M. Reeves)
Date: Wed, 26 Oct 2016 12:12:11 +0100
Subject: [sos-devel] Proposal: Synergy of upstream and downstream testing of sos
In-Reply-To:
References:
Message-ID: <20161026111210.GB9882@dhcp-24-182.fab.redhat.com>

On Wed, Oct 26, 2016 at 10:41:51AM +0200, Miroslav Hradilek wrote:
> For downstream, we develop tests for a) specific issues, b) whole
> plugins, and c) basic functionality. Unlike the sos internal test
> suite, these are blackbox integration tests written in bash, as
> opposed to Python whitebox tests that exercise the functions
> themselves. Sometimes the tests are run in a real environment, but
> very often the files and commands to be collected are mocked. To make
> our lives easier there are libraries of setup functions, mock
> functions, assertions and so on.

Right: these are integration tests (that work on 'sosreport' as a
whole), whereas the upstream suite is primarily unit testing.
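
To make that split concrete for other readers, here is a very rough
sketch of the kind of blackbox check being described, with a single
switch between a mocked and a real run. This is purely illustrative:
the SOS_TEST_MOCK variable and the test below are invented for this
mail, not anything that exists in either suite today.

import os
import subprocess
import tarfile
import tempfile
import unittest

# Single switch: mocked fixtures by default, real collection when "0".
MOCKED = os.environ.get("SOS_TEST_MOCK", "1") == "1"


class NetworkingCollectionTest(unittest.TestCase):
    """Blackbox check: did the networking plugin collect /proc/net/dev?"""

    def setUp(self):
        if MOCKED:
            # Mocked run: a canned member list stands in for the archive,
            # so the test runs anywhere without root or a real sosreport.
            self.collected = {"sosreport-mock/proc/net/dev"}
        else:
            # Real run: invoke sosreport and read back what it archived.
            tmpdir = tempfile.mkdtemp()
            subprocess.check_call(["sosreport", "--batch", "-o",
                                   "networking", "--tmp-dir", tmpdir])
            archive = next(name for name in os.listdir(tmpdir)
                           if name.endswith(".tar.xz"))
            with tarfile.open(os.path.join(tmpdir, archive)) as tar:
                self.collected = {member.name for member in tar.getmembers()}

    def test_proc_net_dev_collected(self):
        # The assertion is identical in both modes; only the setup differs.
        self.assertTrue(any(name.endswith("proc/net/dev")
                            for name in self.collected))


if __name__ == "__main__":
    unittest.main()

The point being that the assertion stays the same in both modes;
flipping the switch only changes how the fixture is provided.
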
One thing to remember here, though: there are more downstreams than
just the Red Hat / Fedora world today. sos is actively maintained in
both Debian and Ubuntu, is used in several hypervisor products (notably
IBM's PowerKVM), and has at least some users on SuSE and other
distributions.

> To ensure these tests can be reused, a lot of branching needs to be
> done within the code, depending on the environment but mostly on the
> version of sosreport and the patches applied downstream. Code
> branching is very inefficient and takes a lot of effort. While
> writing the tests, the library is developed alongside them and needs
> to be maintained too. Upstream gets only bug reports from these
> efforts.

It might help some of the other readers on the list to give a brief
example, or overview, of the downstream testing we do in RHEL, and why
maintaining multiple branches becomes painful.

> Problems I suggest we solve:
> 1. Avoid downstream branching and duplicate test library development.
> 2. Extend upstream test coverage and the test library with
>    contributions from downstream testers.
> 3. Write test library functions and tests so that they can run in a
>    mocked environment as well as in a real environment with the flick
>    of a switch.

I think this is an excellent goal. We tried a few years ago to get a
Docker-based testing system together that was trying to address
similar needs:

 - coverage of multiple distributions (policies/environments)
 - reproducible environments for testing
 - repeatable tests

Some of the ideas were tracked in the following GitHub issue:

https://github.com/sosreport/sos/issues/335

> Extra work I suggest we take on:
> 1. Developers and plugin contributors would be required to update the
>    tests and libraries so that they keep working with their commits.

Ack. This is something we've gotten a little better at for API
changes - asking developers to also amend the unit test suite - but
extending this to also cover integration testing would be really
useful.

> 2. Downstream testers would extend the upstream test suite and
>    libraries and later reuse them downstream.
>
> My proposal:
> * Let's choose a test framework and assertion library, and develop
>   our library and fixtures upstream. Develop at least part of the
>   integration test suite [ b) and c) ] upstream and use the upstream
>   library for a) downstream.
> * Write tests and mock functions so that they use real file chunks;
>   by changing the environment, the mock setups can be disabled so
>   that real files and commands are collected and asserted.
> * Require the integration test suite to pass in order to accept a
>   commit. Encourage submitting integration tests with new plugins and
>   new plugin functionality.

I'm curious what it would look like, and how we would manage to support
the range of different downstreams from a single upstream testing base,
but I think it's definitely worth more investigation.

As you're more experienced with this kind of testing, and with the
available tools, than most of the team, would you be OK writing up a
more detailed proposal, or something that demos how some of this would
work?

Regards,
Bryn.