
Re: [Libcdio-devel] [PATCH 2/5] Add extract ISO/UDF example


From: Rocky Bernstein
Subject: Re: [Libcdio-devel] [PATCH 2/5] Add extract ISO/UDF example
Date: Mon, 13 Feb 2012 20:41:47 -0500

I will merge the various pbatard patches into master and write additional
tests as I can. Over time, the difference between pbatard and master will be
reduced.

So there is no confusion: merges and commits should preserve the property
that "make check" and "make distcheck" do not fail. If these failed in the
past, that was a bug and should have been brought to my attention, or
everyone's.

It is possible that over time we will decide the incompatibility introduced
is large enough to warrant another branch for people to try out temporarily
for testing.

I don't have a time estimate on this other than before the next release.


On Mon, Feb 13, 2012 at 7:31 AM, Pete Batard <address@hidden> wrote:

> On 2012.02.13 07:33, Rocky Bernstein wrote:
>
>> Although I said that in test-driven development advocates suggest writing
>> tests first, so that tests start out broken, by the time one commits code
>> the tests should work.
>>
>
> Well, for UDF, that was pretty much the case. The test was broken and was
> simply added so that, once the code fixes land, we know that UDF has been
> fixed.
> Also, as a reminder, I am not planning to add more tests, as I really
> don't have time for that. I fixed the ones I found broken, but that's
> really as far as I'm going to go.
>
>
>  How was I or anyone else supposed to know that this was supposed to fail?
>> You said something about tests failing because of symbolic links.
>>
>
> For rockridge. I said nothing about UDF. I also never said the UDF test
> was supposed to work, especially since I mentioned that the next series of
> patches would fix testing, and that the core fixes would come after that.
> The implication was that, since fixes were needed for tests, some tests may
> not be working, and even after tests were fixed, some tests might still not
> PASS if they required core changes.
>
>
>  But this
>> failing test has nothing to do with that. So I spent time with gdb, which
>> I probably didn't have to do
>>
>
> I'm afraid you didn't, unless you found that it was also broken in
> -pbatard.
> If it works in -pbatard, then it'll work in mainline and you just have to
> wait for the patch to be submitted (or request it if you want to have it
> before that).
>
>
>  When someone wants me to review patches for integration, one of my first
>> lines of defense is whether things are now broken. The process I use for
>> installing software is to check out or update the code, build it, *test*
>> it, perhaps try it uninstalled and finally install it. If the testing
>> doesn't work, there is a serious consideration on my part as to whether to
>> go forward.
>>
>
> Then we can't go with the plan I proposed, because I'm not planning to
> push the fixes to core that are likely needed for the UDF test to pass
> until this series and the next have been integrated...
>
> My assumption was that you would take a couple of days to review and
> integrate patches into mainline, so that we could move to the next set and
> reduce the amount of time during which tests in mainline may be broken --
> though as far as I know, only the UDF test was expected to be broken, and
> the cue test would have been fixed with the next set => I didn't see it as
> a serious issue at all, as long as integration happened in a timely manner.
>

It's more important to me to make sure there is no breakage than to get
this done quickly.


>
> Note that even after we fix the tests, MinGW will stay broken for some
> time, if it compiles at all, because fixes to core will be needed for most
> of the tests to PASS.
> Thus, I don't see that as all that different from UDF being temporarily broken.
>

It is very common for software to have bugs, as libcdio does. Also, our
test coverage is currently not very good. But there is a difference between
bugs that we haven't caught or haven't written tests for and bugs that we
*have* written tests for.

It is apparent that MinGW compilation is broken. So I will change the
configure script to reflect that.


> If we go with your approach, then we should have the MinGW tests also
> working right away, but considering the number of fixes that are needed to
> get there, I don't think that can fly.
>

If MinGW works only in very specific situations, the configure script
should test for those important aspects and disallow others. We don't allow
all versions of FreeBSD.

You seem to keep suggesting a setting where one has to write a zillion
tests first which cover every aspect imaginable. That's not the case, and
it isn't how this project has been proceeding. We have been adding a few
incremental tests here and there as time goes along.

So there was one test to see if we could extract a file from a UDF image.
You reported a bug when we tried to extract two files in succession, so a
test for that was added. It wasn't a lot of effort.
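In outline, that pair of tests amounts to something like the sketch below.
The image and file names here are placeholders rather than the actual test
fixtures, cleanup of the dirents is omitted, and error handling is
abbreviated; the calls are the libudf ones declared in <cdio/udf.h>, and the
point is simply that a second extraction follows the first:

  #include <stdio.h>
  #include <stdbool.h>
  #include <stdint.h>
  #include <sys/types.h>
  #include <cdio/udf.h>

  /* Read one file out of an already-opened UDF filesystem, block by block. */
  static int
  extract_one(udf_dirent_t *p_udf_root, const char *psz_name)
  {
    udf_dirent_t *p_udf_file = udf_fopen(p_udf_root, psz_name);
    char buf[UDF_BLOCKSIZE];
    uint64_t i_size, i_read = 0;

    if (!p_udf_file) {
      fprintf(stderr, "could not find %s\n", psz_name);
      return 1;
    }
    i_size = udf_get_file_length(p_udf_file);
    while (i_read < i_size) {
      ssize_t n = udf_read_block(p_udf_file, buf, 1);
      if (n <= 0) return 1;
      i_read += (uint64_t) n;
      /* A real test would also compare buf against known contents. */
    }
    return 0;
  }

  int
  main(void)
  {
    udf_t *p_udf = udf_open("test.udf");        /* placeholder image name */
    udf_dirent_t *p_udf_root;
    int rc = 0;

    if (!p_udf) return 77;                      /* skip if no image around */
    p_udf_root = udf_get_root(p_udf, true, 0);  /* root of any partition */
    if (!p_udf_root) { udf_close(p_udf); return 1; }

    /* Extracting a second file right after the first is the case that
       originally broke. */
    rc |= extract_one(p_udf_root, "file1.txt"); /* placeholder file names */
    rc |= extract_one(p_udf_root, "file2.txt");

    udf_close(p_udf);
    return rc;
  }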

Without a doubt there are a myriad of other things that should be tested
with regard to libudf --  those can wait. But those two specific tests
should continue to work or be improved.

So a patch for the UDF extraction, or a merge of that code, shouldn't be
done before the foundation for the extraction test to work is in place, even
if it is expected to work at some later time. Patch 2 can wait or, better,
be adjusted so it is skipped for now.
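To be concrete about "skipped": under the autotools test harness a test that
exits with status 77 is reported as SKIP rather than FAIL, so until the core
fixes land a placeholder along these lines (the message text is made up)
would do:

  #include <stdio.h>

  int
  main(void)
  {
    fprintf(stderr, "SKIP: UDF extraction needs core fixes that are not "
                    "merged yet\n");
    return 77;  /* automake treats exit status 77 as a skipped test */
  }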



>
> As long as we're making our best effort to bring mainline in line with
> a branch that we know has the fixes required, I really fail to see a
> problem. It's only a matter of whether you want to prevent git users of
> libcdio from seeing work-in-progress.
>

One of the good things about git is that you can commit locally as much as
you want. And if the place you commit to is Internet accessible, you can let
others clone it and even push to it. Alternatively or additionally, one can
create patches from it, even if the local repository is not Internet
accessible.

But when one pushes back to the Savannah libcdio repository (which might
be after a series of local commits), I'd prefer that "make check" and "make
distcheck" work.

It makes for a simple rule: if it is in the public repository, it is
supposed to work. Especially master. People who use the repository casually
are going to be confused by any other rule.

That said, I can live with one branch, the pbatard branch, playing by
different rules. But this would be an exception, not the norm.




> If that's the case, then I'd suggest you create a separate integration
> branch, that you can push to mainline wholesale once you're happy that
> everything has been completed to your liking. Doing it this way won't
> require any of us to spend additional time that could easily be spared, as
> I'm really not prepared to do extra work when the end result of the
> integration exercise will be exactly the same. Adding stuff, such as
> marking a test "expected fail" only to have it reverted a few days later,
> doesn't strike me as a good time investment, sorry.
>
>
>  Also, I want to encourage people at any time to check out what is in git
>> and use it and give feedback if something is wrong. It is not uncommon in my
>> projects for various packagers to release code to the general public based
>> on a point in time that the packager decided to check out the code in git.
>> But if we have code in there that isn't passing tests, how are *they*
>> supposed to know tests are supposed to fail?
>>
>
> They should see that integration is occurring, and if it proceeds at a
> proper pace, they should find that any test that was failing gets fixed in
> a matter of days. If that's not acceptable to you, then I think a separate
> branch should be created for this exercise, labelled "work in progress" or
> something.
> Also, the UDF test was just introduced. Something that was just introduced
> usually requires adjustments or may not even work at all, if it's
> integrated in bite-size chunks.
>
>
>  The proper way to mark a test as "not quite done" is to mark it skippable.
>> In this particular project, which uses the built-in autotools mechanism,
>> the test should exit with return code 77. Additionally, it would be nice
>> to print a message to stderr giving a reason why the test is skipped. For
>> example: not finished yet, or not applicable for this OS or architecture.
>>
>
> I'm trying to minimize the time I have to spend, and you were the one who
> introduced the UDF test. It may sound selfish, but I see this test as your
> responsibility rather than mine. If you are unhappy with the test, then I
> vote for removing it altogether, since that's the option that requires the
> least amount of time on my end.
>
>
>  My plan is to add the tests so that once we fix things, we can get some
>>> validation.
>>>
>>
>> Again, this is backwards from the way I do development. The tests *are* my
>> first line of validation, especially when, as is going on here, there
>> are massive changes. The tests are what give me confidence that things
>> haven't gotten worse. Getting user-acceptance is absolutely important, but
>> I have found that fewer people are willing to try code in projects when
>> the
>> tests (regularly) fail.
>>
>
> This approach would require me to devote more time than I planned. If we
> integrate in a timely manner, the concern above should not be justified.
> But if you think it is, then please create a separate branch for this
> integration exercise.
>
>
>  And just because there are branches doesn't necessarily mean it's okay for
>> tests to fail in the branches.
>>
>
> Well, the -pbatard branch was only supposed to concern itself with
> extraction and not with anything else. I explicitly stated, when I started
> with libcdio and you created the -pbatard branch, that I would disable
> almost everything not related to image extraction, and I also went on to
> create a specific autogen-pbatard.sh as a result.
> You agreed that it was reasonable.


What I meant was reasonable was to have a branch which might focus on this
or that aspect, and that it is okay for that branch to even have special
conveniences for configuration. I didn't mean to suggest that different
programming conventions would be okay for that branch. Conventions are
things like the style of C used, that there are tests, and that the tests
are supposed to always work when they are committed publicly.


>
>  Recently in libcdio-help someone mentioned
>> failing compilation on MinGW. I suggested trying the pbatard branch.
>> Things
>> are probably stalled here because the code doesn't compile for him.
>>
>
> I tested with MinGW-w64, and it works for me, so my expectation is that it
> should work for everyone with MinGW-w64. If we find some MinGW-w64
> platforms don't, then we can see what we can do to fix them.
>
> Also from the README.libcdio:
> "It has been reported that MinGW (http://www.mingw.org/) also works, but
> it is possible you may encounter more problems there."
>
> Until integration has been completed, the above applies 100%, even in the
> -pbatard branch.
>

Ok.


>
> I wasn't subscribed to libcdio-help, but I am now. I'll reply to Kyle
> when I have a chance, because it looks to me like different versions of
> MinGW-w64 may use stdio.h headers that may have major differences, and we
> will need to identify what they are.


I see you have - thanks.


>
>
>  See
>> http://lists.gnu.org/archive/html/libcdio-help/2012-02/threads.html . But
>> I imagine if it had compiled, checking that tests pass in the pbatard
>> branch would have been tried next.
>>
>
> And at least half of these tests would have failed, because quite a few
> fixes are required to get them to pass with regular autogen.sh...
>
> IMO, the sooner we're done with integration, the better for MinGW users.
>
> Regards,
>
> /Pete
>
> PS:
>
>

