[katello-devel] Travis Testing Pull Requests

Jeff Weiss jweiss at redhat.com
Mon Oct 29 12:18:16 UTC 2012



----- Original Message -----
> From: "Tomas Strachota" <tstrachota at redhat.com>
> To: "Eric Helms" <ehelms at redhat.com>
> Cc: katello-devel at redhat.com
> Sent: Monday, October 29, 2012 5:42:52 AM
> Subject: Re: [katello-devel] Travis Testing Pull Requests
> 
> On 10/26/2012 05:20 PM, Eric Helms wrote:
> > Howdy All,
> >
> > You may notice from today forward that as you open pull requests and
> > make commits to them, there will be a status indicator about testing
> > on Travis.  As of https://github.com/Katello/katello/pull/925 we are
> > trying out a Travis configuration that will run the following on all
> > pull requests, repository merges, and updates to pull requests.
> >
> > - CLI unittests
> > - Pylint
> > - SCSS compilation
> > - Rspec unittests
> >
> > The configuration is currently set up to fail fast. In other words,
> > it will not always run all four steps; it tries to be as quick as
> > possible, so that if one step fails, the build is flagged as failed
> > and execution stops.
> >
> > This may require some tuning as we get used to it, so please provide
> > any feedback or thoughts on the process as we begin to use it.
> >
> > - Eric
> >
> 
> It looks great!
> 
> Just one thought against the fail-fast approach:
> I made a pull request with some server-side changes, and the tests
> around it weren't executed at all because of failing CLI unit tests.
> In this case the test report was not very useful. Is it possible to
> always run all the tests, or is that too resource-intensive?
> 
> T.
> 

I agree with the fail-fast approach. In my experience, trying to get a slower but complete result just leaves you with mostly slow, incomplete results, so you might as well make it fast.
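
For illustration only (this is not the actual configuration from pull 925, and the commands are placeholders), a fail-fast pipeline really just runs each step in order and stops at the first non-zero exit. A rough Python sketch:

    #!/usr/bin/env python
    # Hypothetical fail-fast runner; the real Travis setup may differ and
    # the commands below are placeholders, not Katello's actual targets.
    import subprocess
    import sys

    STEPS = [
        ("CLI unit tests",   ["make", "test-cli"]),      # placeholder
        ("Pylint",           ["make", "pylint"]),        # placeholder
        ("SCSS compilation", ["make", "compile-scss"]),  # placeholder
        ("RSpec unit tests", ["make", "test-rspec"]),    # placeholder
    ]

    for name, cmd in STEPS:
        print("==> %s" % name)
        ret = subprocess.call(cmd)
        if ret != 0:
            # Fail fast: flag the build as failed and skip the rest.
            print("FAILED: %s (exit %d)" % (name, ret))
            sys.exit(ret)

    print("All steps passed")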

What you're aiming to find out here is "what changes in behavior did my PR cause?" So it's not really about what the test results are in absolute terms; it's about the diff between the PR's results and master's.
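
Just to make that concrete, here is a sketch that assumes each run dumps its failing test names, one per line, to a text file (the file names are made up):

    # Hypothetical comparison of the PR branch's failures against master's.
    # Assumes each test run wrote one failing test name per line to a file.
    def read_failures(path):
        with open(path) as f:
            return set(line.strip() for line in f if line.strip())

    master = read_failures("master_failures.txt")  # assumed file name
    pr = read_failures("pr_failures.txt")          # assumed file name

    print("New failures introduced by the PR: %s" % sorted(pr - master))
    print("Failures fixed by the PR: %s" % sorted(master - pr))
    print("Pre-existing failures (noise): %s" % sorted(pr & master))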

Of course, it's a whole lot easier to determine that if master's test runs are kept green, because then you know any failed test on the PR matters. The fact that the CLI tests failed probably means they were already failing on master, which should not be happening, and is part of the problem I think we're trying to solve here.

And then, of course, there are failures in the testing infrastructure that have nothing to do with katello at all. So pass == good, but fail == maybe not good.

-Jeff





