qemu-devel

Re: [RFC PATCH 0/1] ci: Speed up container stage


From: Alex Bennée
Subject: Re: [RFC PATCH 0/1] ci: Speed up container stage
Date: Thu, 23 Feb 2023 15:43:37 +0000
User-agent: mu4e 1.9.21; emacs 29.0.60

Daniel P. Berrangé <berrange@redhat.com> writes:

> On Thu, Feb 23, 2023 at 11:21:53AM -0300, Fabiano Rosas wrote:
>> I'm not sure if this was discussed previously, but I noticed we're not
>> pulling the images we push to the registry at every pipeline run.
>> 
>> I would expect we don't actually need to rebuild container images at
>> _every_ pipeline run, so I propose we add a "docker pull" to the
>> container templates. We already have that for the docker-edk2|opensbi
>> images.
>> 
>> Some containers can take a long time to build (14 mins) and pulling
>> the image first, rather than building from scratch, can cut the time
>> to about 3 mins. With this we can save almost 2h of cumulative CI
>> time per pipeline run:
>
> The docker.py script that we're invoking is already pulling the
> image itself, e.g. to pick a random recent job:
>
>   https://gitlab.com/qemu-project/qemu/-/jobs/3806090058
>
> We can see
>
>   $ ./tests/docker/docker.py --engine docker build -t "qemu/$NAME" \
>       -f "tests/docker/dockerfiles/$NAME.docker" \
>       -r $CI_REGISTRY/qemu-project/qemu                          03:54
>   Using default tag: latest
>   latest: Pulling from qemu-project/qemu/qemu/debian-arm64-cross
>   bb263680fed1: Pulling fs layer
>   ...snip...
>
> Nonetheless it still went ahead and rebuilt the image from scratch,
> so something is going wrong here. I don't know why your change adding
> an extra 'docker pull' would have any effect, given we're already
> pulling, so I wonder if that's just a coincidental apparent change
> due to the initial state of your fork's container registry.
>
> Whenever I look at this I end up wishing our docker.py didn't exist
> and that we could just directly do
>
>   - docker pull "$TAG"
>   - docker build --cache-from "$TAG" --tag "$TAG" \
>       -f "tests/docker/dockerfiles/$NAME.docker" tests/docker
>
> as that should be sufficient to build the image with caching.

I think we should be ready to do that now as we have flattened all our
dockerfiles. The only other thing docker.py does is append a final step
that adds the current user, so that files generated inside the docker
cross-compile images are still readable on the host.
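
Roughly, something like this in the container template would do it (an
untested sketch only; the template name, variables and paths are
approximations, not what the current .gitlab-ci.d config uses):

  .container_job_template:
    stage: containers
    script:
      # per-project registry tag, e.g.
      # registry.gitlab.com/qemu-project/qemu/qemu/$NAME
      - export TAG="$CI_REGISTRY_IMAGE/qemu/$NAME:latest"
      # seed the layer cache from the last published image; ignore a
      # missing image on the very first run
      - docker pull "$TAG" || true
      - docker build --cache-from "$TAG" --tag "$TAG"
          -f "tests/docker/dockerfiles/$NAME.docker" tests/docker
      # (registry login omitted for brevity)
      - docker push "$TAG"

For local builds the current-user layer could stay a one-liner too,
e.g. piping a two line "FROM $TAG" + "RUN useradd ..." dockerfile into
"docker build -" after the main build.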

>> We would need to devise a mechanism (not included here) to force the
>> re-build of the container images when needed, perhaps an environment
>> variable or even a whole new "container build" stage before the
>> "container" stage.
>> 
>> What do you think?
>
> We definitely want the rebuild to be cached. So whatever is
> broken in that regard needs fixing, as this used to work AFAIK.
>
>
> Ideally we would skip the container stage entirely for any
> pull request that did NOT include changes to the dockerfile.

That would be ideal.

> The problem is that the way we're using gitlab doesn't let
> that work well. We need to set up rules based on filepath.
> Such rules are totally unreliable for push events in
> practice, because they only evaluate the delta between what
> you just pushed and what was already available on the server.
> This does not match the content of the pull request; it might
> be just a subset.
>
> If we had subsystem maintainers opening a merge request for
> their submission, then we could reliably write rules based
> on what files are changed by the pull request, and entirely
> skip the containers stage most of the time, which would be
> an even bigger saving.
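
For reference, the rule we'd need is roughly this (again only a
sketch; the template name and paths are approximate):

  .container_job_template:
    rules:
      # 'changes' is compared against the full diff of the merge
      # request only in merge request pipelines; on plain push events
      # it only sees the delta of that particular push
      - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
        changes:
          - tests/docker/dockerfiles/*.docker
          - .gitlab-ci.d/container*.yml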

Our first tentative steps away from an email process?

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


