Re: [GNUnet-developers] Proposal: Make GNUnet Great Again?


From: Schanzenbach, Martin
Subject: Re: [GNUnet-developers] Proposal: Make GNUnet Great Again?
Date: Sat, 9 Feb 2019 17:04:35 +0100

I have some inline comments as well below, but let us try to bring this 
discussion down to a more practical consensus.
I think we are arguing too much in the extremes, and that is not helpful. I am 
not saying we should compartmentalise
GNUnet into the tiniest possible components; it is just that I think it is 
becoming a bit bloated.

That being said, _most_ of what is in GNUnet today is perfectly fine in a 
single repo and package.
For now, at least let us not add another one (gtk) as well?

Then, we remain with

- reclaim (+the things reclaim needs wrt libraries)
- conversation (+X)
- secureshare (+X)
- fs (+X)

as components/services on my personal "list".
I suggest that _if_ I find the time, I could extract reclaim into a separate 
repo as soon as we have a CI, so that I can
test how it works and we can learn from the experience.
Then, we can discuss whether we want to do the same with other components, one 
at a time, if there is consensus and a person
who would be willing to take ownership (I am pretty sure we talked about this 
concept last summer as well).

> On 9. Feb 2019, at 13:38, Christian Grothoff <address@hidden> wrote:
> 
> On 2/9/19 1:06 PM, Hartmut Goebel wrote:
>> Assume we have a huge repo:
>> 
>>  * The total number of build-triggers is the same as for smaller
>>    repos (assuming each push is a trigger).
>>  * The build-time of each repo is (much) longer, since the whole repo
>>    will be built from scratch. Since there are no files from a previous
>>    build, everything has to be built.
> 
> Is that true? autotools can re-build based on timestamps that have
> changed for like forever. With Buildbot, I can certainly do incremental
> builds, I am not forced to do a "make distclean" every time. Similarly,
> build triggers do not have to be as coarse as "any push"; I could
> specify that a push to directory X triggers tests (make check) in
> directories X, Y and Z, or not?
> 
> If the CI requires always building every repo from scratch and always
> running all tests, maybe the CI is to blame? IIRC with Buildbot, you do
> have more control than "always redo everything".

The tests and build should not be stateful. I do not see any advantage in 
having stateful builds,
_especially_ in C-based and autotools-based projects, where existing linker 
artifacts may mask FTBFS situations that would show up in clean environments.
This has happened to me before and is one of the reasons I keep bitching about 
CI (and by that I mean: automatic build and test in clean environments).

> 
>>  * Developers get the CI results later, sitting around waiting for the
>>    result. (One of my projects takes 1:30 to finish its CI run, which
>>    is annoying.)
> 
> Agreed, but faster tests, parallel tests and selective tests based on
> dependencies (which can theoretically even be decided automatically)
> seem to me like the smarter solution here.
> 
>>  * When packaging (.deb, .rpm, guix), huge repos/archives are much more
>>    annoying to package: build-time is long, test-time is long, and if
>>    anything fails or new patches are required, you'll start again.
>>    (Some of the KDE packages take 15 minutes to build. Iterating on
>>    this is really painful!)
> 
> I'm not convinced that one big build is really much worse here than 50
> small ones.

Then you need to explain how you would justify rebuilding libgnunetutil and 
dht, and running their tests again, just because you updated the gtk UI 
(hypothetically, if it were in the repo).
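The path-based trigger idea mentioned above can be sketched in a few lines of shell. This is a toy example (the directory names `src/dht` and `src/gtk` are hypothetical stand-ins, not the real layout): the CI step inspects which paths the push touched and skips a component's tests when it was not affected.

```shell
# Build a throwaway repo where a push touches only the gtk/ subtree.
dir=$(mktemp -d) && cd "$dir" && git init -q .
git config user.email dev@example.com && git config user.name dev
mkdir -p src/dht src/gtk
touch src/dht/dht.c src/gtk/gtk.c
git add -A && git commit -qm 'initial import'
echo '/* UI tweak */' >> src/gtk/gtk.c
git commit -qam 'gtk: cosmetic change'
# The CI step: run a component's tests only if the push touched it.
if git diff --name-only HEAD~1 HEAD | grep -q '^src/dht/'; then
    echo "running dht tests"            # e.g. make -C src/dht check
else
    echo "dht untouched, skipping its tests"
fi
```

Whether such filtering is worth maintaining by hand is, of course, part of the disagreement here.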

> 
>>    (When configuring gitlab-CI some of the issues could be solved, see
>>    https://docs.gitlab.com/ce/ci/yaml/README.html#onlychanges-and-exceptchanges)
>> 
>> Also from a developer's perspective, a huge repo has some drawbacks: e.g.
>> when switching branches or bisecting, git will touch a lot of files
>> which all need to be rebuilt, which takes time.
> 
> Granted, but your Git-driven bisection becomes much less useful if you
> first have to identify which of the 50 repos really is the cause of the
> regression.  So here you are trading touching files for the power to
> more easily identify non-obvious sources of bugs.

No, because even if your repo is triggered to build+test due to a change in 
the code of another repo, you would not even start bisecting.
Actually, you can't, because your git repo didn't see any commits! You simply 
have a failed test instead of X commits which have nothing to do with your 
codebase but which you may now have to bisect (which again is pointless, but 
you do not know that in advance).
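For the single-repo case Christian describes, the failing test itself can drive the bisection. A self-contained toy sketch (the commits, tag name, and `check.sh` stand in for a real test suite such as `make check`):

```shell
# Build a tiny repo with a known-good commit and a later regression.
dir=$(mktemp -d) && cd "$dir" && git init -q .
git config user.email dev@example.com && git config user.name dev
echo 'exit 0' > check.sh && chmod +x check.sh
git add check.sh && git commit -qm 'tests pass' && git tag known-good
git commit -qm 'unrelated change' --allow-empty
echo 'exit 1' > check.sh && git commit -qam 'regression slips in'
# Let git narrow down the breaking commit automatically:
git bisect start HEAD known-good   # bad revision, last good revision
git bisect run ./check.sh          # prints the first bad commit
git bisect reset
```

In a multi-repo setup this loop has nothing to walk over when the offending commit landed in a different repository, which is the point above.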

