[libvirt PATCH 4/4] gitlab-ci: Introduce a new test 'integration' pipeline stage

Erik Skultety eskultet at redhat.com
Mon Feb 14 14:53:29 UTC 2022


On Thu, Feb 10, 2022 at 09:40:45AM +0000, Daniel P. Berrangé wrote:
> On Mon, Jan 31, 2022 at 07:01:01PM +0100, Erik Skultety wrote:
> > Create an integration child pipeline in this stage which will trigger a
> > multi-project CI build of Perl bindings which are required by the TCK
> > test suite.
> > In general, this stage will install all the necessary build artifacts
> > and configure logging on the worker node prior to executing the actual
> > test suite. In case of a failure, libvirt and Avocado logs are saved
> > and published as artifacts.
> > 
> > Signed-off-by: Erik Skultety <eskultet at redhat.com>
> > ---
> >  .gitlab-ci-integration.yml | 116 +++++++++++++++++++++++++++++++++++++
> 

...

> > diff --git a/.gitlab-ci-integration.yml b/.gitlab-ci-integration.yml
> > new file mode 100644
> > index 0000000000..cabefc5166
> > --- /dev/null
> > +++ b/.gitlab-ci-integration.yml
> > @@ -0,0 +1,116 @@
> > +stages:
> > +  - bindings
> > +  - integration
> > +
> > +.tests:
> > +  stage: integration
> > +  before_script:
> > +    - mkdir "$SCRATCH_DIR"
> > +    - sudo dnf install -y libvirt-rpms/* libvirt-perl-rpms/*
> > +    - sudo pip3 install --prefix=/usr avocado-framework
> 
> I'd prefer it if we could just 'dnf install avocado' on Fedora at least,
> so that we validate that, if someone is running avocado locally, we're
> compatible with what's packaged.

That's my desire as well, but it is currently not possible because:
a) the Avocado shipped via the standard package manager channel is too old for
   ^this; to be precise, it would mark every skipped job as failed due to a
   missing TAP parser fix (we need avocado-91+)
b) the 3rd party RPM repo currently returns 404 on my Fedora-34, and in any
   case, last time I checked even that repo only shipped avocado-90

Cleber, how realistic is it for the Avocado project to build RPMs with every
release?
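
In the meantime, one interim tweak would be to at least pin the minimum
version in the pip call, e.g. (untested sketch of the .tests template's
before_script):

  .tests:
    stage: integration
    before_script:
      # rest of the setup as in the patch; just make the TAP parser fix
      # requirement (avocado-91+) explicit
      - sudo pip3 install --prefix=/usr "avocado-framework>=91.0"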

...

> > +
> > +
> > +libvirt-perl-bindings:
> > +  stage: bindings
> > +  trigger:
> > +    project: eskultety/libvirt-perl
> > +    branch: multi-project-ci
> > +    strategy: depend
> 
> IIUC, what this does is to spawn a pipeline in the
> 'libvirt-perl' project on the 'multi-project-ci'
> branch.  Normally this is asynchronous, but
> the 'strategy: depend' causes us to block until
> this async pipeline is complete.

Yes, that's correct.

> 
> > +centos-stream-8-tests:
> > +  extends: .tests
> > +  needs:
> > +    - libvirt-perl-bindings
> 
> So this triggers the perl bindings pipeline build

No, the trigger is the 'libvirt-perl-bindings' job above, but we need to list
the trigger job here as well, otherwise the integration job would not wait
until the new RPMs are available and would instead just download the currently
latest ones (which remain downloadable even when expired!).
I don't know whether this is a bug, but I originally didn't list it and noticed
that the integration jobs never waited for the bindings to be built and instead
downloaded the latest available copy.
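
Put differently, the full 'needs' list of a test job combines three kinds of
entries (same hunks as in this patch, just annotated):

  centos-stream-8-tests:
    extends: .tests
    needs:
      # wait for the child pipeline triggered in the 'bindings' stage,
      # i.e. for fresh perl-Sys-Virt RPMs to be built
      - libvirt-perl-bindings
      # wait for and fetch the libvirt RPMs from the parent pipeline
      - pipeline: $PARENT_PIPELINE_ID
        job: x86_64-centos-stream-8
      # only fetch the perl-Sys-Virt RPMs from the triggered pipeline;
      # the trigger job above has already done the waiting
      - project: eskultety/libvirt-perl
        job: x86_64-centos-stream-8
        ref: multi-project-ci
        artifacts: true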

> 
> > +    - pipeline: $PARENT_PIPELINE_ID
> > +      job: x86_64-centos-stream-8
> 
> This is making us wait for the centos-stream-8
> job in the normal CI pipeline.
> 
> Do we need 'artifacts: true' here too?

I guess we can be explicit, but as long as the dependencies come from the same
pipeline, the artifacts are automatically available (that is by design).

> 
> IIRC, 'artifacts: true' was the default, but it
> feels sane to make it explicit, especially
> since you were explicit for the libvirt-perl
> job below.

I can check whether GitLab is happy with that too, but conceptually, sure, why
not.
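
i.e. something like this (not verified against the CI schema yet):

    - pipeline: $PARENT_PIPELINE_ID
      job: x86_64-centos-stream-8
      # be explicit even if this should already be the default
      artifacts: true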

> 
> > +    - project: eskultety/libvirt-perl
> > +      job: x86_64-centos-stream-8
> > +      ref: multi-project-ci
> > +      artifacts: true
> 
> And this is making us wait for the centos-stream-8
> job in the libvirt-perl pipeline that we spawned
> in 'libvirt-perl-bindings'. IIUC there should
> be no waiting needed, since 'libvirt-perl-bindings'
> was blocking on completion of the triggered pipeline,
> so this effectively just pulls in the artifacts from
> the already finished job.

Yes, exactly, this only downloads the artifacts, even though the syntax hints
that a job is spawned :/ . However, as I wrote above, without also waiting for
the trigger job, ^this hunk would just pull the latest available artifacts
instead of waiting for the ones currently being built.

> 
> I presume your eskultety/libvirt-perl repo
> multi-project-ci branch has a change that causes
> the perl-Sys-Virt RPMs to be published as artifacts,
> similar to your change in the previous patch in
> this series.

Yes, in fact, I just submitted the following MRs:
https://gitlab.com/libvirt/libvirt-perl/-/merge_requests/54
https://gitlab.com/libvirt/libvirt-perl/-/merge_requests/55

with the latter enabling what we need here.


> 
> > +  variables:
> > +    DISTRO: centos-stream-8
> > +  tags:
> > +    - centos-stream-vm
> 
> This means none of these jobs will run by default, unless
> you've registered a custom runner with this tag. So no
> forks will get these jobs, it'll be post-merge only.
> 
> That's fine for now. When we switch to merge requests
> we will gain the ability to trigger this even for forks
> without runners, as merge request jobs will attach to the
> primary project runners, which is nice.

Yes. Custom runners are always private to the project, so forks never have
access to those unless we manually run the MR in the context of the libvirt
project.

> 
> > diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
> > index 4bcaf22ce2..453472c8be 100644
> > --- a/.gitlab-ci.yml
> > +++ b/.gitlab-ci.yml
> > @@ -4,6 +4,7 @@ variables:
> >  stages:
> >    - containers
> >    - builds
> > +  - test
> >    - sanity_checks
> 
> Perhaps just put this in the 'sanity_checks' stage? Since all
> the real jobs are in the child pipeline, I don't think we need
> an extra stage here.

Not my preference, but I can have it either way; a new stage can be added at
any time if needed.
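
If we fold it into 'sanity_checks', the delta would be roughly (sketch):

  integration:
    stage: sanity_checks
    needs:
      - x86_64-centos-stream-8
      - x86_64-centos-stream-9
      - x86_64-fedora-34
      - x86_64-fedora-35
    # rest of the job as proposed in this patch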

> 
> >  .script_variables: &script_variables |
> > @@ -128,3 +129,16 @@ coverity:
> >      - curl https://scan.coverity.com/builds?project=$COVERITY_SCAN_PROJECT_NAME --form token=$COVERITY_SCAN_TOKEN --form email=$GITLAB_USER_EMAIL --form file=@cov-int.tar.gz --form version="$(git describe --tags)" --form description="$(git describe --tags) / $CI_COMMIT_TITLE / $CI_COMMIT_REF_NAME:$CI_PIPELINE_ID"
> >    rules:
> >      - if: "$CI_PIPELINE_SOURCE == 'schedule' && $COVERITY_SCAN_PROJECT_NAME && $COVERITY_SCAN_TOKEN"
> > +
> > +integration:
> > +  stage: test
> > +  needs:
> > +    - x86_64-centos-stream-8
> > +    - x86_64-centos-stream-9
> > +    - x86_64-fedora-34
> > +    - x86_64-fedora-35
> 
> Ok, so any job in the child pipeline with a dependency on a
> job in this pipeline needs to have the dependency repeated
> here.

I don't think it's strictly needed, but we have so many container builds
running that waiting for ALL of them seemed pointless to me, so I explicitly
listed the ones we need so that the integration stage can start ASAP.

Thanks,
Erik



