qemu-devel

Re: [PATCH 0/5] QEMU Gating CI


From: Cleber Rosa
Subject: Re: [PATCH 0/5] QEMU Gating CI
Date: Thu, 23 Apr 2020 13:36:48 -0400 (EDT)


----- Original Message -----
> From: "Daniel P. Berrangé" <address@hidden>
> To: "Cleber Rosa" <address@hidden>
> Cc: "Peter Maydell" <address@hidden>, "Fam Zheng" <address@hidden>, "Thomas Huth" <address@hidden>, "Beraldo Leal" <address@hidden>, "Erik Skultety" <address@hidden>, "Philippe Mathieu-Daudé" <address@hidden>, "Wainer Moschetta" <address@hidden>, "Markus Armbruster" <address@hidden>, "Wainer dos Santos Moschetta" <address@hidden>, "QEMU Developers" <address@hidden>, "Willian Rampazzo" <address@hidden>, "Alex Bennée" <address@hidden>, "Eduardo Habkost" <address@hidden>
> Sent: Thursday, April 23, 2020 1:13:22 PM
> Subject: Re: [PATCH 0/5] QEMU Gating CI
> 
> On Thu, Apr 23, 2020 at 01:04:13PM -0400, Cleber Rosa wrote:
> > 
> > 
> > ----- Original Message -----
> > > From: "Peter Maydell" <address@hidden>
> > > To: "Markus Armbruster" <address@hidden>
> > > Cc: "Fam Zheng" <address@hidden>, "Thomas Huth" <address@hidden>, "Beraldo Leal" <address@hidden>, "Erik Skultety" <address@hidden>, "Alex Bennée" <address@hidden>, "Wainer Moschetta" <address@hidden>, "QEMU Developers" <address@hidden>, "Wainer dos Santos Moschetta" <address@hidden>, "Willian Rampazzo" <address@hidden>, "Cleber Rosa" <address@hidden>, "Philippe Mathieu-Daudé" <address@hidden>, "Eduardo Habkost" <address@hidden>
> > > Sent: Tuesday, April 21, 2020 8:53:49 AM
> > > Subject: Re: [PATCH 0/5] QEMU Gating CI
> > > 
> > > On Thu, 19 Mar 2020 at 16:33, Markus Armbruster <address@hidden>
> > > wrote:
> > > > Peter Maydell <address@hidden> writes:
> > > > > I think we should start by getting the gitlab setup working
> > > > > for the basic "x86 configs" first. Then we can try adding
> > > > > a runner for s390 (that one's logistically easiest because
> > > > > it is a project machine, not one owned by me personally or
> > > > > by Linaro) once the basic framework is working, and expand
> > > > > from there.
> > > >
> > > > Makes sense to me.
> > > >
> > > > Next steps to get this off the ground:
> > > >
> > > > * Red Hat provides runner(s) for x86 stuff we care about.
> > > >
> > > > * If that doesn't cover 'basic "x86 configs" in your judgement, we
> > > >   fill the gaps as described below under "Expand from there".
> > > >
> > > > * Add an s390 runner using the project machine you mentioned.
> > > >
> > > > * Expand from there: identify the remaining gaps, map them to people /
> > > >   organizations interested in them, and solicit contributions from
> > > >   these
> > > >   guys.
> > > >
> > > > A note on contributions: we need both hardware and people.  By people I
> > > > mean maintainers for the infrastructure, the tools and all the runners.
> > > > Cleber & team are willing to serve for the infrastructure, the tools
> > > > and
> > > > the Red Hat runners.
> > > 
> > > So, with 5.0 nearly out the door it seems like a good time to check
> > > in on this thread again to ask where we are progress-wise with this.
> > > My impression is that this patchset provides most of the scripting
> > > and config side of the first step, so what we need is for RH to provide
> > > an x86 runner machine and tell the gitlab CI it exists. I appreciate
> > > that the whole coronavirus and working-from-home situation will have
> > > upended everybody's plans, especially when actual hardware might
> > > be involved, but how's it going ?
> > > 
> > 
> > Hi Peter,
> > 
> > You hit the nail on the head here.  We were indeed affected in our
> > ability to move some machines from one lab to another (across the
> > country), but we're actively working on it.
> 
> For x86, do we really need to be using custom runners ?
> 

Hi Daniel,

We're already using the shared x86 runners, but with a different goal.  The
goal of the "Gating CI" is indeed to expand on non-x86 environments.  We're
in a "chicken and egg" kind of situation, because we'd like to prove that
GitLab CI will allow QEMU to expand to very different runners and jobs, while
not really having all that hardware setup and publicly available at this time.

My experiments were really around that point: confirming that we can grow
the number of architectures/runners/jobs/configurations to provide coverage
equal to or greater than what Peter already does.

> With GitLab if someone forks the repo to their personal namespace, they
> cannot use any custom runners setup by the origin project. So if we use
> custom runners for x86, people forking won't be able to run the GitLab
> CI jobs.
> 

They will continue to be able to use the jobs and runners already defined in
the .gitlab-ci.yml file.  This work will only affect people pushing to the/a
"staging" branch.
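
A minimal sketch of how that split might look in .gitlab-ci.yml (the job
names, tags, and branch restriction here are illustrative assumptions, not
the actual QEMU configuration):

```yaml
# Jobs without tags run on GitLab's shared runners, so forks get them for free.
build-x86:
  script:
    - ./configure && make -j"$(nproc)"

# A gating job: confined to the "staging" branch and to runners registered
# with a matching tag, so it never blocks CI in personal forks.
gating-s390x:
  tags:
    - s390x
  only:
    - staging
  script:
    - ./configure && make -j"$(nproc)" check
```

In this scheme a fork simply never schedules `gating-s390x`, since forks have
no "staging" branch pipeline and no runner carrying the tag.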

> As a sub-system maintainer I wouldn't like this, because I ideally want
> to be able to run the same jobs on my staging tree, that Peter will run
> at merge time for the PULL request I send.
> 

If you're looking for symmetry between any PR and "merge time" jobs, the
only solution is to allow any PR to access the full, diverse set of non-shared
machines we're hoping to have.  This may be something we'll get to
eventually, but I doubt we can tackle it right now.

> Thus my strong preference would be to use the GitLab runners in every
> scenario where they are viable to use. Only use custom runners in the
> cases where GitLab runners are clearly inadequate for our needs.
> 
> Based on what we've setup in GitLab for libvirt,  the shared runners
> they have work fine for x86. Just need the environments you are testing
> to be provided as Docker containers (you can actually build and cache
> the container images during your CI job too).  IOW, any Linux distro
> build and test jobs should be able to use shared runners on x86, and
> likewise mingw builds. Custom runners should only be needed if the
> jobs need to do *BSD / macOS builds, and/or have access to specific
> hardware devices for some reason.
> 
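
Daniel's suggestion above, building and caching the container images inside
the CI job itself, can be sketched roughly as follows (the stage name and
Dockerfile path are assumptions; the `CI_REGISTRY*` variables are provided
by GitLab):

```yaml
# Build the test-environment image once, pushing it to the project's
# container registry; later pipelines pull it back as a layer cache.
build-container:
  stage: containers
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE/build-env:latest" || true
    - docker build --cache-from "$CI_REGISTRY_IMAGE/build-env:latest" -t "$CI_REGISTRY_IMAGE/build-env:latest" tests/docker
    - docker push "$CI_REGISTRY_IMAGE/build-env:latest"
```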

We've discussed this before at RFC time: the goal is for a broader
community to provide a wider range of jobs.  Even for x86, one may want
to require their jobs to run on a given accelerator, such as KVM, so we
need to consider that from the very beginning.

I don't see a problem with converging jobs which are being run on custom
runners back into shared runners as much as possible.  In the RFC discussion,
I actually pointed out how the build phase could run essentially
on pre-built containers (on shared runners), but the test phase, say testing
KVM, should not be bound to that.
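
As a rough illustration of that build/test split (stage names, the tag, the
image name, and the artifact paths are all assumptions for the sketch):

```yaml
stages:
  - build
  - test

# Build inside a pre-built container on a shared runner.
build-x86_64:
  stage: build
  image: registry.example.com/qemu/build-env:latest   # assumed image name
  script:
    - ./configure --target-list=x86_64-softmmu && make -j"$(nproc)"
  artifacts:
    paths:
      - build/

# Test on a custom runner that actually exposes /dev/kvm (tag name assumed).
test-kvm:
  stage: test
  tags:
    - kvm
  needs:
    - build-x86_64
  script:
    - make -C build check
```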

So in essence, right now, moving everything to containers would invalidate the
exercise of being able to care for those custom architectures/builds/jobs we'll
need in the near future.  And that's really the whole point here.

Cheers,
- Cleber.

> 
> Regards,
> Daniel
> --
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 



