Re: [Libcdio-devel] [PATCH 2/5] Add extract ISO/UDF example

From: Pete Batard
Subject: Re: [Libcdio-devel] [PATCH 2/5] Add extract ISO/UDF example
Date: Mon, 13 Feb 2012 12:31:10 +0000
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:10.0.1) Gecko/20120208 Thunderbird/10.0.1

On 2012.02.13 07:33, Rocky Bernstein wrote:
> Although, as I said, test-driven development advocates suggest writing
> tests first (so tests start out broken), by the time one commits code the
> tests should work.

Well, for UDF, that was pretty much the case. The test was broken, and it was added simply so that we know, once the code fixes land, that UDF has been fixed. Also, as a reminder, I am not planning to add more tests, as I really don't have time for that. I fixed the ones I found broken, but that's as far as I'm going to go.

> How was I or anyone else supposed to know that this was supposed to fail?
> You said something about tests failing because of symbolic links.

That was for Rock Ridge; I said nothing about UDF. I also never said the UDF test was supposed to work, especially since I mentioned that the next series of patches would fix testing, and that the core fixes would come after that. The implication was that, since fixes were needed for tests, some tests might not be working, and that even after the tests were fixed, some might still not PASS if they required core changes.

> But this failing test has nothing to do with that. So I spent time with
> gdb which I probably didn't have to do

I'm afraid you didn't have to, unless you found that it was also broken in -pbatard.
If it works in -pbatard, then it'll work in mainline, and you just have to wait for the patch to be submitted (or request it if you want to have it before that).

> When someone wants me to review patches for integration, one of my first
> lines of defense is whether things are now broken. The process I use for
> installing software is to check out or update the code, build it, *test*
> it, perhaps try it uninstalled, and finally install it. If the testing
> doesn't work, there is a serious consideration on my part as to whether to
> go forward.

Then we can't go with the plan I proposed, because I'm not planning to push the fixes to core that are likely needed for the UDF test to pass until this series and the next have been integrated...

My assumption was that you would take a couple of days to review and integrate patches to mainline, so that we could move to the next set and reduce the amount of time during which tests in mainline may be broken. As far as I know, only the UDF test was expected to be broken, and the cue test would have been fixed with the next set, so I didn't see this as a serious issue at all, as long as integration happened in a timely manner.

Note that even after we fix the tests, MinGW will stay broken for some time, if it compiles at all, because fixes to core will be needed for most of the tests to PASS.
Thus, I don't see that as that different from UDF being temporarily broken.

If we go with your approach, then we should have the MinGW tests also working right away, but considering the amount of fixes that are needed to get there, I don't think that can fly.

As long as we're making our best effort to bring mainline in line with a branch that we know has the required fixes, I really fail to see a problem. It's only a matter of whether you want to prevent git users of libcdio from seeing work in progress.

If that's the case, then I'd suggest you create a separate integration branch that you can push to mainline wholesale once you're happy that everything has been completed to your liking. Doing it this way won't require any of us to spend additional time that could easily be spared, as I'm really not prepared to do extra work when the end result of the integration exercise will be exactly the same. Adding stuff, such as marking a test "expected fail" only to have it reverted a few days later, doesn't strike me as a good time investment, sorry.
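For what it's worth, the arrangement I have in mind could look something like the sketch below, demonstrated here in a throwaway repository. The branch names (`integration`, `pbatard`) and commit messages are only illustrative, and the test-suite step is a placeholder for whatever validation you run before publishing:

```shell
#!/bin/sh
# Sketch of a "separate integration branch" workflow in a throwaway repo.
# Branch names and messages are illustrative, not libcdio's actual layout.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "mainline baseline"
git branch -M master                  # normalize the branch name
git checkout -q -b pbatard            # work-in-progress series lives here
git commit -q --allow-empty -m "WIP: extraction fixes"
git checkout -q master
git checkout -q -b integration        # staging area for review
git merge -q --no-ff pbatard -m "stage WIP series for review"
# ... run the test suite here (e.g. make check) before publishing ...
git checkout -q master
git merge -q --ff-only integration    # push to mainline wholesale when happy
git log -1 --pretty=%s
```

Until the final `--ff-only` merge, none of the work in progress is visible on master, which is the point of the exercise.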

> Also, I want to encourage people at any time to check out what is in git,
> use it, and give feedback if something is wrong. It is not uncommon in my
> projects for various packagers to release code to the general public based
> on a point in time that the packager decided to check out the code in git.
> But if we have code in there that isn't passing tests, how are *they*
> supposed to know tests are supposed to fail?

They should see that integration is occurring, and if it occurs at a proper pace, they should find that any test that was failing gets fixed in a matter of days. If that's not acceptable to you, then I think a separate branch should be created for this exercise, labelled "work in progress" or something. Also, the UDF test was just introduced, and something that was just introduced usually requires adjustments, or may not even work at all, if it's integrated in bite-size chunks.

> The proper way to mark a test as "not quite done" is to mark it skippable.
> In this particular project, which uses the built-in autotools mechanism,
> the test should exit with return code 77. Additionally it would be
> nice to print a message to stderr giving a reason why the test is skipped.
> For example: not finished yet, or not applicable for this OS, or
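To make that concrete, here is a minimal sketch of the convention described above. The helper name and the `UDF_FIXES_LANDED` variable are made up for illustration; only the exit status 77 is the actual autotools convention:

```shell
#!/bin/sh
# Automake treats a test exiting with status 77 as SKIP rather than FAIL.
# The helper name and environment variable below are illustrative only.
maybe_skip() {
  # $1 = "yes" when the prerequisite (e.g. merged core fixes) is present
  # $2 = reason for skipping, printed to stderr
  if [ "$1" != "yes" ]; then
    echo "SKIP: $2" >&2
    return 77
  fi
  return 0
}

status=0
maybe_skip "${UDF_FIXES_LANDED:-no}" "core UDF fixes not yet merged" || status=$?
echo "maybe_skip returned $status"
```

A real test script would `exit $status` at that point, so the harness reports the test as SKIP instead of FAIL.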

I'm trying to minimize the time I spend on this, and you were the one who introduced the UDF test. It may sound selfish, but I see this test as your responsibility rather than mine. If you are unhappy with the test, then I vote for removing it altogether, since that's the option that requires the least amount of time on my end.

>> My plan is to add the tests so that once we fix things, we can get some

> Again backwards from the way I do development. The tests *are* my
> first line of validation. And especially when, as is going on here, there
> are massive changes. The tests are what give me confidence that things
> haven't gotten worse. Getting user acceptance is absolutely important, but
> I have found that fewer people are willing to try code in projects when the
> tests (regularly) fail.

This approach would require me to devote more time than I planned. If we integrate in a timely manner, the concern above should not be justified; but if you think it is, then please create a separate branch for this integration exercise.

> And just because there are branches doesn't necessarily mean it's okay for
> tests to fail in the branches.

Well, the -pbatard branch was only supposed to concern itself with extraction, and not with anything else. I explicitly stated, when I started with libcdio and you created the -pbatard branch, that I would disable almost everything not related to image extraction, and I also went on to create a specific one as a result. You agreed that it was reasonable.

> Recently in libcdio-help someone mentioned failing compilation on MinGW.
> I suggested trying the pbatard branch. Things are probably stalled here
> because the code doesn't compile for him.

I tested with MinGW-w64, and it works for me, so my expectation is that it should work for everyone with MinGW-w64. If we find some MinGW-w64 platforms don't, then we can see what we can do to fix them.

Also from the README.libcdio:
"It has been reported that MinGW also works, but it is possible you may encounter more problems there."

Unless integration has been completed, the above applies 100%, even in the -pbatard branch.

I wasn't subscribed to libcdio-help, but I am now. I'll reply to Kyle when I have a chance, because it looks to me like different versions of MinGW-w64 may ship stdio.h headers with major differences, and we will need to identify what they are.

> See  . But
> I imagine if it had compiled, checking that tests pass in the pbatard
> branch would have next been tried.

And at least half of those tests would have failed, because quite a few fixes are required to get them to pass with regular MinGW.

IMO, the sooner we're done with integration, the better for MinGW users.



