
bug#32548: closed (Cuirass: Performance monitoring)


From: GNU bug Tracking System
Subject: bug#32548: closed (Cuirass: Performance monitoring)
Date: Thu, 17 Sep 2020 10:09:02 +0000

Your message dated Thu, 17 Sep 2020 12:07:49 +0200
with message-id <877dss90ne.fsf@gnu.org>
and subject line Re: bug#32548: Cuirass: Performance monitoring
has caused the debbugs.gnu.org bug report #32548,
regarding Cuirass: Performance monitoring
to be marked as done.

(If you believe you have received this mail in error, please contact
help-debbugs@gnu.org.)


-- 
32548: http://debbugs.gnu.org/cgi/bugreport.cgi?bug=32548
GNU Bug Tracking System
Contact help-debbugs@gnu.org with problems
--- Begin Message ---
Subject: Cuirass: Performance monitoring
Date: Tue, 28 Aug 2018 00:33:30 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/26.1 (gnu/linux)
As discussed earlier today on IRC with Clément, we could add performance
monitoring capabilities to Cuirass.  Interesting metrics would be:

  • time of push to time of evaluation completion;

  • time of evaluation completion to time of build completion.

We could visualize that per job over time.  Perhaps these are also stats
that ‘guix weather’ could display.
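The two proposed durations can be sketched as follows (a minimal Python illustration with hypothetical timestamps; Cuirass itself is written in Guile, and the field names here are not its actual API):

```python
from datetime import datetime

def monitoring_metrics(push_time, eval_done_time, build_done_time):
    """Compute the two proposed durations, in seconds, for one evaluation:
    push -> evaluation completion, and evaluation -> build completion."""
    push_to_eval = (eval_done_time - push_time).total_seconds()
    eval_to_build = (build_done_time - eval_done_time).total_seconds()
    return push_to_eval, eval_to_build

# Hypothetical timestamps for one job:
metrics = monitoring_metrics(
    datetime(2018, 8, 28, 0, 0, 0),   # push received
    datetime(2018, 8, 28, 0, 10, 0),  # evaluation completed
    datetime(2018, 8, 28, 2, 10, 0),  # build completed
)
print(metrics)  # (600.0, 7200.0)
```

Plotting these two values per job over time would give the visualization suggested above.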

Ludo’.



--- End Message ---
--- Begin Message ---
Subject: Re: bug#32548: Cuirass: Performance monitoring
Date: Thu, 17 Sep 2020 12:07:49 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
Hey Ludo,

> As discussed on IRC, builds per day should be compared to new
> derivations per day.  For example, if on a day there’s 100 new
> derivations and we only manage to build 10 of them, we have a problem.

I added this line, and sadly the two curves do not overlap :(
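The comparison amounts to tracking the build backlog: if daily builds fall short of daily new derivations, the deficit accumulates. A minimal sketch (illustrative Python, not Cuirass code; the counts mirror the 100-new/10-built example above):

```python
def daily_backlog(new_derivations, completed_builds):
    """Given per-day counts of new derivations and completed builds,
    return the cumulative backlog after each day.  A growing backlog
    means the build farm cannot keep up."""
    backlog = []
    total = 0
    for new, built in zip(new_derivations, completed_builds):
        total += new - built
        backlog.append(total)
    return backlog

print(daily_backlog([100, 100], [10, 50]))  # [90, 140] -- backlog grows
```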

> 2020-09-14T21:16:21 Failed to compute metric average-eval-duration-per-spec (version-1.1.0).
> 2020-09-14T21:16:21 Failed to compute metric average-10-last-eval-duration-per-spec (wip-desktop).
> 2020-09-14T21:16:21 Failed to compute metric average-100-last-eval-duration-per-spec (wip-desktop).
> 2020-09-14T21:16:21 Failed to compute metric average-eval-duration-per-spec (wip-desktop).
>
> Perhaps it can’t compute an average yet for these jobsets?

Yes, as soon as those evaluations are repaired, we should be able to
compute those metrics. I chose to keep the error messages as a
reminder.

I added various other metrics and updated the "/metrics" page. Once we
have a better view, we should think of adding thresholds on those
metrics.
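Thresholding could be as simple as flagging any metric that exceeds a configured limit (a hypothetical sketch in Python; the metric names and values below are illustrative, not the actual Cuirass metrics):

```python
# Hypothetical per-metric thresholds, in seconds.
THRESHOLDS = {"average-eval-duration": 3600, "push-to-eval": 1800}

def check_thresholds(metrics, thresholds=THRESHOLDS):
    """Return the subset of metrics whose value exceeds its threshold."""
    return {name: value for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]}

# An evaluation averaging 90 minutes trips the 1-hour threshold:
print(check_thresholds({"average-eval-duration": 5400, "push-to-eval": 600}))
```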

Closing this one!

Thanks,

Mathieu

-- 
https://othacehe.org


--- End Message ---
