[Freeipa-devel] [PROPOSAL] FreeIPA Test Plan Workflow

Martin Kosek mkosek at redhat.com
Thu Mar 19 11:33:12 UTC 2015


On 03/19/2015 11:45 AM, Petr Spacek wrote:
> On 19.3.2015 10:11, Martin Kosek wrote:
>> On 03/19/2015 09:25 AM, Petr Spacek wrote:
>>> Hello,
>>>
>>> I do not have much to add to the process itself. After a first reading it
>>> seems pretty heavyweight, but let's try it; it can be refined at any time :-)
>>
>> Right, but then we would need to migrate the data about test completion and
>> so on - which is more work. So it is much better to define something that
>> works now than to change it a couple of months later.
>>
>> We were already trying to invent something as lightweight as possible; these
>> are the minimum new fields we came up with to be able to track the test
>> coverage and plans. If you have another proposal for how to track it better,
>> I would love to hear it, really :-)
> 
> Sure. For me the main question is when the *designing of tests* should start
> and how it is synchronized with feature design. Is it done in parallel? Or
> sequentially? When does the feedback from test designers flow back? Isn't it
> too late?
> 
> Let's discuss ticket workflow like this:
> new -> design functionality&tests -> write code&tests -> test run -> closed
> 
> IMHO we should have tests *designed* before we start to implement the final
> version of the functionality. It may be too late to find out that the
> interface design is flawed (e.g. from the user's point of view) once the
> feature is fully implemented and the test phase is reached.
> 
> Designing/writing tests early could uncover things like poor interface design
> sooner, while it is still easy to change interfaces. Currently we have 'design'
> reviews before the implementation starts, but actually designing tests at the
> same time would attract more eyes/brains to the feature design phase. We may
> call it a 'first usability review' if we wish :-)
> 
> In my mind, test designers should be the first feature users (even if only
> virtually), so their early feedback is crucial.
> 
> Note that this approach does not preclude experimental/quick&dirty prototyping
> as part of the design phase, but it has to be clear that the prototype might
> (and should!) be thrown away if the first idea wasn't the best one.

Yes! This is exactly why this QE team was created - to be able to test as early
as possible and to review designs with QE eyes as early as possible.

> If this is too radical:
> 
> To me it seems kind of unnatural to separate testing from the overall bug
> state. An equivalent of the ON_QA state in Bugzilla seems more natural to me,
> as it is kind of weird to claim that a ticket is closed/finished before the
> full testing cycle is finished.
> 
> I.e. the ticket could have states like:
> new -> assigned -> qe -> closed
> "qe" state can be easily skipped if no testing is (deemed to be) necessary.

This is an alternative approach, yes. Trac has a workflow plugin that should be
able to add it. But wouldn't that workflow actually encourage the classic
waterfall approach rather than the more agile approach, with testing running
more or less in parallel with the work on the code?

The point is that an RFE may still be in development and in the QE state at the
same time - hence the separate field.
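
For illustration only - if we did go the workflow route, a "qe" state could
probably be added with something like this in trac.ini (the action names are
made up, only the new actions are shown, and I have not tested it):

    [ticket-workflow]
    send_to_qe = assigned,accepted -> qe
    send_to_qe.permissions = TICKET_MODIFY
    qe_pass = qe -> closed
    qe_pass.operations = set_resolution
    qe_pass.permissions = TICKET_MODIFY
    qe_skip = assigned,accepted -> closed
    qe_skip.operations = set_resolution
    qe_skip.permissions = TICKET_MODIFY

That would give us the linear new -> assigned -> qe -> closed flow you describe,
but it would not express development and QE running at the same time.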

> 
> Then there is the question whether we actually need a separate field for the
> QE state in addition to the Test case field. The Test case field could behave
> in the same way as the Bugzilla link field:
> - empty field - undecided
> - 0 (or the string "not necessary" or something) - a test case is deemed unnecessary
> - non-zero link - apparently, a test case exists
> 
> It would be more consistent with what we have for Bugzilla links.
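
Just as a technical note, such a Test case field would simply be a custom text
field in trac.ini, roughly like this (the field name is only an example):

    [ticket-custom]
    testcase = text
    testcase.label = Test Case

The empty / "not necessary" / link convention would then be purely ours to
follow by habit; Trac itself would not enforce it.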

The metadata we come up with should be able to answer at least the following
queries:
- which tickets (RFEs/bugs) are covered with tests in a specific milestone, and
what the test cases are
- who from the QE team is working on which tickets
- the list of tickets where we want tests and which are up for grabs by a QE
engineer

I am not sure if this can be covered just with the extra QE phase and Test Case
link.
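
For example, assuming custom fields like "testcase" and "qe_owner" (both names
purely illustrative), the first query could become a custom Trac report along
these lines - an untested sketch against the ticket and ticket_custom tables,
with the milestone name just an example:

    SELECT t.id, t.summary, tc.value AS test_case
      FROM ticket t
      LEFT JOIN ticket_custom tc ON tc.ticket = t.id AND tc.name = 'testcase'
     WHERE t.milestone = 'FreeIPA 4.2'
       AND COALESCE(tc.value, '') <> '';

The second and third queries need to know who from QE owns the ticket, which is
exactly the information a single Test Case link does not carry.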



